Nov 26 11:07:06 localhost kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 26 11:07:06 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 26 11:07:06 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 26 11:07:06 localhost kernel: BIOS-provided physical RAM map:
Nov 26 11:07:06 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 26 11:07:06 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 26 11:07:06 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 26 11:07:06 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Nov 26 11:07:06 localhost kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Nov 26 11:07:06 localhost kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 26 11:07:06 localhost kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 26 11:07:06 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 26 11:07:06 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 26 11:07:06 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000027fffffff] usable
Nov 26 11:07:06 localhost kernel: NX (Execute Disable) protection: active
Nov 26 11:07:06 localhost kernel: APIC: Static calls initialized
Nov 26 11:07:06 localhost kernel: SMBIOS 2.8 present.
Nov 26 11:07:06 localhost kernel: DMI: Red Hat OpenStack Compute/RHEL, BIOS 1.16.1-1.el9 04/01/2014
Nov 26 11:07:06 localhost kernel: Hypervisor detected: KVM
Nov 26 11:07:06 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 26 11:07:06 localhost kernel: kvm-clock: using sched offset of 3200372947 cycles
Nov 26 11:07:06 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 26 11:07:06 localhost kernel: tsc: Detected 2445.406 MHz processor
Nov 26 11:07:06 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 26 11:07:06 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 26 11:07:06 localhost kernel: last_pfn = 0x280000 max_arch_pfn = 0x400000000
Nov 26 11:07:06 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 26 11:07:06 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 26 11:07:06 localhost kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 26 11:07:06 localhost kernel: found SMP MP-table at [mem 0x000f5b60-0x000f5b6f]
Nov 26 11:07:06 localhost kernel: Using GB pages for direct mapping
Nov 26 11:07:06 localhost kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 26 11:07:06 localhost kernel: ACPI: Early table checksum verification disabled
Nov 26 11:07:06 localhost kernel: ACPI: RSDP 0x00000000000F5B20 000014 (v00 BOCHS )
Nov 26 11:07:06 localhost kernel: ACPI: RSDT 0x000000007FFE35EB 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 26 11:07:06 localhost kernel: ACPI: FACP 0x000000007FFE3403 0000F4 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 26 11:07:06 localhost kernel: ACPI: DSDT 0x000000007FFDFCC0 003743 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 26 11:07:06 localhost kernel: ACPI: FACS 0x000000007FFDFC80 000040
Nov 26 11:07:06 localhost kernel: ACPI: APIC 0x000000007FFE34F7 000090 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 26 11:07:06 localhost kernel: ACPI: MCFG 0x000000007FFE3587 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 26 11:07:06 localhost kernel: ACPI: WAET 0x000000007FFE35C3 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 26 11:07:06 localhost kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe3403-0x7ffe34f6]
Nov 26 11:07:06 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfcc0-0x7ffe3402]
Nov 26 11:07:06 localhost kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfc80-0x7ffdfcbf]
Nov 26 11:07:06 localhost kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe34f7-0x7ffe3586]
Nov 26 11:07:06 localhost kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe3587-0x7ffe35c2]
Nov 26 11:07:06 localhost kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe35c3-0x7ffe35ea]
Nov 26 11:07:06 localhost kernel: No NUMA configuration found
Nov 26 11:07:06 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000027fffffff]
Nov 26 11:07:06 localhost kernel: NODE_DATA(0) allocated [mem 0x27ffd3000-0x27fffdfff]
Nov 26 11:07:06 localhost kernel: crashkernel reserved: 0x0000000060000000 - 0x0000000070000000 (256 MB)
Nov 26 11:07:06 localhost kernel: Zone ranges:
Nov 26 11:07:06 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 26 11:07:06 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 26 11:07:06 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000027fffffff]
Nov 26 11:07:06 localhost kernel:   Device   empty
Nov 26 11:07:06 localhost kernel: Movable zone start for each node
Nov 26 11:07:06 localhost kernel: Early memory node ranges
Nov 26 11:07:06 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 26 11:07:06 localhost kernel:   node   0: [mem 0x0000000000100000-0x000000007ffdafff]
Nov 26 11:07:06 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000027fffffff]
Nov 26 11:07:06 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000027fffffff]
Nov 26 11:07:06 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 26 11:07:06 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 26 11:07:06 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 26 11:07:06 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 26 11:07:06 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 26 11:07:06 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 26 11:07:06 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 26 11:07:06 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 26 11:07:06 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 26 11:07:06 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 26 11:07:06 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 26 11:07:06 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 26 11:07:06 localhost kernel: TSC deadline timer available
Nov 26 11:07:06 localhost kernel: CPU topo: Max. logical packages:   4
Nov 26 11:07:06 localhost kernel: CPU topo: Max. logical dies:       4
Nov 26 11:07:06 localhost kernel: CPU topo: Max. dies per package:   1
Nov 26 11:07:06 localhost kernel: CPU topo: Max. threads per core:   1
Nov 26 11:07:06 localhost kernel: CPU topo: Num. cores per package:     1
Nov 26 11:07:06 localhost kernel: CPU topo: Num. threads per package:   1
Nov 26 11:07:06 localhost kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 26 11:07:06 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 26 11:07:06 localhost kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 26 11:07:06 localhost kernel: kvm-guest: setup PV sched yield
Nov 26 11:07:06 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 26 11:07:06 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 26 11:07:06 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 26 11:07:06 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 26 11:07:06 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x7ffdb000-0x7fffffff]
Nov 26 11:07:06 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x80000000-0xafffffff]
Nov 26 11:07:06 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xb0000000-0xbfffffff]
Nov 26 11:07:06 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfed1bfff]
Nov 26 11:07:06 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfed1c000-0xfed1ffff]
Nov 26 11:07:06 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfed20000-0xfeffbfff]
Nov 26 11:07:06 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 26 11:07:06 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 26 11:07:06 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 26 11:07:06 localhost kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 26 11:07:06 localhost kernel: Booting paravirtualized kernel on KVM
Nov 26 11:07:06 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 26 11:07:06 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 26 11:07:06 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u524288
Nov 26 11:07:06 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u524288 alloc=1*2097152
Nov 26 11:07:06 localhost kernel: pcpu-alloc: [0] 0 1 2 3 
Nov 26 11:07:06 localhost kernel: kvm-guest: PV spinlocks enabled
Nov 26 11:07:06 localhost kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 26 11:07:06 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 26 11:07:06 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 26 11:07:06 localhost kernel: random: crng init done
Nov 26 11:07:06 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 26 11:07:06 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 26 11:07:06 localhost kernel: Fallback order for Node 0: 0 
Nov 26 11:07:06 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 26 11:07:06 localhost kernel: Policy zone: Normal
Nov 26 11:07:06 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 26 11:07:06 localhost kernel: software IO TLB: area num 4.
Nov 26 11:07:06 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 26 11:07:06 localhost kernel: ftrace: allocating 49313 entries in 193 pages
Nov 26 11:07:06 localhost kernel: ftrace: allocated 193 pages with 3 groups
Nov 26 11:07:06 localhost kernel: Dynamic Preempt: voluntary
Nov 26 11:07:06 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 26 11:07:06 localhost kernel: rcu:         RCU event tracing is enabled.
Nov 26 11:07:06 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=4.
Nov 26 11:07:06 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Nov 26 11:07:06 localhost kernel:         Rude variant of Tasks RCU enabled.
Nov 26 11:07:06 localhost kernel:         Tracing variant of Tasks RCU enabled.
Nov 26 11:07:06 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 26 11:07:06 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 26 11:07:06 localhost kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 26 11:07:06 localhost kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 26 11:07:06 localhost kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 26 11:07:06 localhost kernel: NR_IRQS: 524544, nr_irqs: 456, preallocated irqs: 16
Nov 26 11:07:06 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 26 11:07:06 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 26 11:07:06 localhost kernel: Console: colour VGA+ 80x25
Nov 26 11:07:06 localhost kernel: printk: console [ttyS0] enabled
Nov 26 11:07:06 localhost kernel: ACPI: Core revision 20230331
Nov 26 11:07:06 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 26 11:07:06 localhost kernel: x2apic enabled
Nov 26 11:07:06 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Nov 26 11:07:06 localhost kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 26 11:07:06 localhost kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 26 11:07:06 localhost kernel: kvm-guest: setup PV IPIs
Nov 26 11:07:06 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 26 11:07:06 localhost kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Nov 26 11:07:06 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 26 11:07:06 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 26 11:07:06 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 26 11:07:06 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 26 11:07:06 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 26 11:07:06 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 26 11:07:06 localhost kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 26 11:07:06 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 26 11:07:06 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 26 11:07:06 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 26 11:07:06 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 26 11:07:06 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 26 11:07:06 localhost kernel: Transient Scheduler Attacks: Vulnerable: No microcode
Nov 26 11:07:06 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 26 11:07:06 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 26 11:07:06 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 26 11:07:06 localhost kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 26 11:07:06 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 26 11:07:06 localhost kernel: x86/fpu: xstate_offset[9]:  832, xstate_sizes[9]:    8
Nov 26 11:07:06 localhost kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Nov 26 11:07:06 localhost kernel: Freeing SMP alternatives memory: 40K
Nov 26 11:07:06 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 26 11:07:06 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 26 11:07:06 localhost kernel: landlock: Up and running.
Nov 26 11:07:06 localhost kernel: Yama: becoming mindful.
Nov 26 11:07:06 localhost kernel: SELinux:  Initializing.
Nov 26 11:07:06 localhost kernel: LSM support for eBPF active
Nov 26 11:07:06 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 26 11:07:06 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 26 11:07:06 localhost kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Nov 26 11:07:06 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 26 11:07:06 localhost kernel: ... version:                0
Nov 26 11:07:06 localhost kernel: ... bit width:              48
Nov 26 11:07:06 localhost kernel: ... generic registers:      6
Nov 26 11:07:06 localhost kernel: ... value mask:             0000ffffffffffff
Nov 26 11:07:06 localhost kernel: ... max period:             00007fffffffffff
Nov 26 11:07:06 localhost kernel: ... fixed-purpose events:   0
Nov 26 11:07:06 localhost kernel: ... event mask:             000000000000003f
Nov 26 11:07:06 localhost kernel: signal: max sigframe size: 3376
Nov 26 11:07:06 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 26 11:07:06 localhost kernel: rcu:         Max phase no-delay instances is 400.
Nov 26 11:07:06 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 26 11:07:06 localhost kernel: smpboot: x86: Booting SMP configuration:
Nov 26 11:07:06 localhost kernel: .... node  #0, CPUs:      #1 #2 #3
Nov 26 11:07:06 localhost kernel: smp: Brought up 1 node, 4 CPUs
Nov 26 11:07:06 localhost kernel: smpboot: Total of 4 processors activated (19563.24 BogoMIPS)
Nov 26 11:07:06 localhost kernel: node 0 deferred pages initialised in 7ms
Nov 26 11:07:06 localhost kernel: Memory: 7768304K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 615232K reserved, 0K cma-reserved)
Nov 26 11:07:06 localhost kernel: devtmpfs: initialized
Nov 26 11:07:06 localhost kernel: x86/mm: Memory block size: 128MB
Nov 26 11:07:06 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 26 11:07:06 localhost kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 26 11:07:06 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 26 11:07:06 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 26 11:07:06 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 26 11:07:06 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 26 11:07:06 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 26 11:07:06 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 26 11:07:06 localhost kernel: audit: type=2000 audit(1764155226.474:1): state=initialized audit_enabled=0 res=1
Nov 26 11:07:06 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 26 11:07:06 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 26 11:07:06 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 26 11:07:06 localhost kernel: cpuidle: using governor menu
Nov 26 11:07:06 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 26 11:07:06 localhost kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 26 11:07:06 localhost kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 26 11:07:06 localhost kernel: PCI: Using configuration type 1 for base access
Nov 26 11:07:06 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 26 11:07:06 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 26 11:07:06 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 26 11:07:06 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 26 11:07:06 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 26 11:07:06 localhost kernel: Demotion targets for Node 0: null
Nov 26 11:07:06 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 26 11:07:06 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 26 11:07:06 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 26 11:07:06 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 26 11:07:06 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 26 11:07:06 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 26 11:07:06 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 26 11:07:06 localhost kernel: ACPI: Interpreter enabled
Nov 26 11:07:06 localhost kernel: ACPI: PM: (supports S0 S5)
Nov 26 11:07:06 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 26 11:07:06 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 26 11:07:06 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 26 11:07:06 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 26 11:07:06 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 26 11:07:06 localhost kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 26 11:07:06 localhost kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR DPC]
Nov 26 11:07:06 localhost kernel: acpi PNP0A08:00: _OSC: OS now controls [SHPCHotplug PME AER PCIeCapability]
Nov 26 11:07:06 localhost kernel: PCI host bridge to bus 0000:00
Nov 26 11:07:06 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x280000000-0xa7fffffff window]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 26 11:07:06 localhost kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 26 11:07:06 localhost kernel: pci 0000:00:01.0: BAR 0 [mem 0xf9800000-0xf9ffffff pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:01.0: BAR 2 [mem 0xfc200000-0xfc203fff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.1: BAR 0 [mem 0xfea1a000-0xfea1afff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.2: BAR 0 [mem 0xfea1b000-0xfea1bfff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.3: BAR 0 [mem 0xfea1c000-0xfea1cfff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.4: BAR 0 [mem 0xfea1d000-0xfea1dfff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.5: BAR 0 [mem 0xfea1e000-0xfea1efff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.6: BAR 0 [mem 0xfea1f000-0xfea1ffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.7: BAR 0 [mem 0xfea20000-0xfea20fff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 11:07:06 localhost kernel: pci 0000:00:04.0: BAR 0 [mem 0xfea21000-0xfea21fff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Nov 26 11:07:06 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 26 11:07:06 localhost kernel: pci 0000:00:1f.0: quirk: [io  0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 26 11:07:06 localhost kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 26 11:07:06 localhost kernel: pci 0000:00:1f.2: BAR 4 [io  0xd040-0xd05f]
Nov 26 11:07:06 localhost kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea22000-0xfea22fff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 26 11:07:06 localhost kernel: pci 0000:00:1f.3: BAR 4 [io  0x0700-0x073f]
Nov 26 11:07:06 localhost kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Nov 26 11:07:06 localhost kernel: pci 0000:01:00.0: BAR 0 [mem 0xfc800000-0xfc8000ff 64bit]
Nov 26 11:07:06 localhost kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 26 11:07:06 localhost kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Nov 26 11:07:06 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Nov 26 11:07:06 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:02: extended config space not accessible
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [0] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [1] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [2] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [3] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [4] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [5] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [6] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [7] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [8] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [9] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [10] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [11] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [12] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [13] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [14] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [15] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [16] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [17] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [18] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [19] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [20] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [21] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [22] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [23] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [24] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [25] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [26] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [27] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [28] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [29] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [30] registered
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [31] registered
Nov 26 11:07:06 localhost kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 26 11:07:06 localhost kernel: pci 0000:02:01.0: BAR 4 [io  0xc000-0xc01f]
Nov 26 11:07:06 localhost kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [0-2] registered
Nov 26 11:07:06 localhost kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Nov 26 11:07:06 localhost kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe840000-0xfe840fff]
Nov 26 11:07:06 localhost kernel: pci 0000:03:00.0: BAR 4 [mem 0xfbe00000-0xfbe03fff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:03:00.0: ROM [mem 0xfe800000-0xfe83ffff pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [0-3] registered
Nov 26 11:07:06 localhost kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint
Nov 26 11:07:06 localhost kernel: pci 0000:04:00.0: BAR 1 [mem 0xfe600000-0xfe600fff]
Nov 26 11:07:06 localhost kernel: pci 0000:04:00.0: BAR 4 [mem 0xfbc00000-0xfbc03fff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [0-4] registered
Nov 26 11:07:06 localhost kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Nov 26 11:07:06 localhost kernel: pci 0000:05:00.0: BAR 4 [mem 0xfba00000-0xfba03fff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [0-5] registered
Nov 26 11:07:06 localhost kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Nov 26 11:07:06 localhost kernel: pci 0000:06:00.0: BAR 4 [mem 0xfb800000-0xfb803fff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [0-6] registered
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [0-7] registered
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [0-8] registered
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [0-9] registered
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [0-10] registered
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [0-11] registered
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [0-12] registered
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [0-13] registered
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [0-14] registered
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [0-15] registered
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [0-16] registered
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Nov 26 11:07:06 localhost kernel: acpiphp: Slot [0-17] registered
Nov 26 11:07:06 localhost kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Nov 26 11:07:06 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 26 11:07:06 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 26 11:07:06 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 26 11:07:06 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 26 11:07:06 localhost kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 26 11:07:06 localhost kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 26 11:07:06 localhost kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 26 11:07:06 localhost kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 26 11:07:06 localhost kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 26 11:07:06 localhost kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 26 11:07:06 localhost kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 26 11:07:06 localhost kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 26 11:07:06 localhost kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 26 11:07:06 localhost kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 26 11:07:06 localhost kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 26 11:07:06 localhost kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 26 11:07:06 localhost kernel: iommu: Default domain type: Translated
Nov 26 11:07:06 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 26 11:07:06 localhost kernel: SCSI subsystem initialized
Nov 26 11:07:06 localhost kernel: ACPI: bus type USB registered
Nov 26 11:07:06 localhost kernel: usbcore: registered new interface driver usbfs
Nov 26 11:07:06 localhost kernel: usbcore: registered new interface driver hub
Nov 26 11:07:06 localhost kernel: usbcore: registered new device driver usb
Nov 26 11:07:06 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 26 11:07:06 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 26 11:07:06 localhost kernel: PTP clock support registered
Nov 26 11:07:06 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 26 11:07:06 localhost kernel: NetLabel: Initializing
Nov 26 11:07:06 localhost kernel: NetLabel:  domain hash size = 128
Nov 26 11:07:06 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 26 11:07:06 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Nov 26 11:07:06 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 26 11:07:06 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 26 11:07:06 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 26 11:07:06 localhost kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 26 11:07:06 localhost kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 26 11:07:06 localhost kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 26 11:07:06 localhost kernel: vgaarb: loaded
Nov 26 11:07:06 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 26 11:07:06 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 26 11:07:06 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 26 11:07:06 localhost kernel: pnp: PnP ACPI init
Nov 26 11:07:06 localhost kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 26 11:07:06 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 26 11:07:06 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 26 11:07:06 localhost kernel: NET: Registered PF_INET protocol family
Nov 26 11:07:06 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 26 11:07:06 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 26 11:07:06 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 26 11:07:06 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 26 11:07:06 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 26 11:07:06 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 26 11:07:06 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 26 11:07:06 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 26 11:07:06 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 26 11:07:06 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 26 11:07:06 localhost kernel: NET: Registered PF_XDP protocol family
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x0fff] to [bus 03] add_size 1000
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.2: bridge window [io  0x1000-0x0fff] to [bus 04] add_size 1000
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.3: bridge window [io  0x1000-0x0fff] to [bus 05] add_size 1000
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.4: bridge window [io  0x1000-0x0fff] to [bus 06] add_size 1000
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.5: bridge window [io  0x1000-0x0fff] to [bus 07] add_size 1000
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.6: bridge window [io  0x1000-0x0fff] to [bus 08] add_size 1000
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.7: bridge window [io  0x1000-0x0fff] to [bus 09] add_size 1000
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.0: bridge window [io  0x1000-0x0fff] to [bus 0a] add_size 1000
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.1: bridge window [io  0x1000-0x0fff] to [bus 0b] add_size 1000
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.2: bridge window [io  0x1000-0x0fff] to [bus 0c] add_size 1000
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.3: bridge window [io  0x1000-0x0fff] to [bus 0d] add_size 1000
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.4: bridge window [io  0x1000-0x0fff] to [bus 0e] add_size 1000
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.5: bridge window [io  0x1000-0x0fff] to [bus 0f] add_size 1000
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.6: bridge window [io  0x1000-0x0fff] to [bus 10] add_size 1000
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.7: bridge window [io  0x1000-0x0fff] to [bus 11] add_size 1000
Nov 26 11:07:06 localhost kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x0fff] to [bus 12] add_size 1000
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x1fff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.2: bridge window [io  0x2000-0x2fff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.3: bridge window [io  0x3000-0x3fff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.4: bridge window [io  0x4000-0x4fff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.5: bridge window [io  0x5000-0x5fff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.6: bridge window [io  0x6000-0x6fff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.7: bridge window [io  0x7000-0x7fff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.0: bridge window [io  0x8000-0x8fff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.1: bridge window [io  0x9000-0x9fff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.2: bridge window [io  0xa000-0xafff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.3: bridge window [io  0xb000-0xbfff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.4: bridge window [io  0xe000-0xefff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.5: bridge window [io  0xf000-0xffff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: can't assign; no space
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: failed to assign
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: can't assign; no space
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: failed to assign
Nov 26 11:07:06 localhost kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: can't assign; no space
Nov 26 11:07:06 localhost kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: failed to assign
Nov 26 11:07:06 localhost kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x1fff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.7: bridge window [io  0x2000-0x2fff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.6: bridge window [io  0x3000-0x3fff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.5: bridge window [io  0x4000-0x4fff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.4: bridge window [io  0x5000-0x5fff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.3: bridge window [io  0x6000-0x6fff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.2: bridge window [io  0x7000-0x7fff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.1: bridge window [io  0x8000-0x8fff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.0: bridge window [io  0x9000-0x9fff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.7: bridge window [io  0xa000-0xafff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.6: bridge window [io  0xb000-0xbfff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.5: bridge window [io  0xe000-0xefff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.4: bridge window [io  0xf000-0xffff]: assigned
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: can't assign; no space
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: failed to assign
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: can't assign; no space
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: failed to assign
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: can't assign; no space
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: failed to assign
Nov 26 11:07:06 localhost kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 26 11:07:06 localhost kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Nov 26 11:07:06 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Nov 26 11:07:06 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.4:   bridge window [io  0xf000-0xffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.5:   bridge window [io  0xe000-0xefff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.6:   bridge window [io  0xb000-0xbfff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.7:   bridge window [io  0xa000-0xafff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.0:   bridge window [io  0x9000-0x9fff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.1:   bridge window [io  0x8000-0x8fff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.2:   bridge window [io  0x7000-0x7fff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.3:   bridge window [io  0x6000-0x6fff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.4:   bridge window [io  0x5000-0x5fff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.5:   bridge window [io  0x4000-0x4fff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.6:   bridge window [io  0x3000-0x3fff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.7:   bridge window [io  0x2000-0x2fff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Nov 26 11:07:06 localhost kernel: pci 0000:00:04.0:   bridge window [io  0x1000-0x1fff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Nov 26 11:07:06 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:00: resource 9 [mem 0x280000000-0xa7fffffff window]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:01: resource 0 [io  0xc000-0xcfff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:01: resource 1 [mem 0xfc600000-0xfc9fffff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:01: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:02: resource 0 [io  0xc000-0xcfff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:02: resource 1 [mem 0xfc600000-0xfc7fffff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:02: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:03: resource 2 [mem 0xfbe00000-0xfbffffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:04: resource 2 [mem 0xfbc00000-0xfbdfffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:05: resource 2 [mem 0xfba00000-0xfbbfffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:06: resource 0 [io  0xf000-0xffff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:06: resource 2 [mem 0xfb800000-0xfb9fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:07: resource 0 [io  0xe000-0xefff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:07: resource 2 [mem 0xfb600000-0xfb7fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:08: resource 0 [io  0xb000-0xbfff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:08: resource 2 [mem 0xfb400000-0xfb5fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:09: resource 0 [io  0xa000-0xafff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:09: resource 2 [mem 0xfb200000-0xfb3fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:0a: resource 0 [io  0x9000-0x9fff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:0a: resource 1 [mem 0xfda00000-0xfdbfffff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:0a: resource 2 [mem 0xfb000000-0xfb1fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:0b: resource 0 [io  0x8000-0x8fff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:0b: resource 1 [mem 0xfd800000-0xfd9fffff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:0b: resource 2 [mem 0xfae00000-0xfaffffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:0c: resource 0 [io  0x7000-0x7fff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:0c: resource 1 [mem 0xfd600000-0xfd7fffff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:0c: resource 2 [mem 0xfac00000-0xfadfffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:0d: resource 0 [io  0x6000-0x6fff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:0d: resource 1 [mem 0xfd400000-0xfd5fffff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:0d: resource 2 [mem 0xfaa00000-0xfabfffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:0e: resource 0 [io  0x5000-0x5fff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:0e: resource 1 [mem 0xfd200000-0xfd3fffff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:0e: resource 2 [mem 0xfa800000-0xfa9fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:0f: resource 0 [io  0x4000-0x4fff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:0f: resource 1 [mem 0xfd000000-0xfd1fffff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:0f: resource 2 [mem 0xfa600000-0xfa7fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:10: resource 0 [io  0x3000-0x3fff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:10: resource 1 [mem 0xfce00000-0xfcffffff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:10: resource 2 [mem 0xfa400000-0xfa5fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:11: resource 0 [io  0x2000-0x2fff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:11: resource 1 [mem 0xfcc00000-0xfcdfffff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:11: resource 2 [mem 0xfa200000-0xfa3fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:12: resource 0 [io  0x1000-0x1fff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:12: resource 1 [mem 0xfca00000-0xfcbfffff]
Nov 26 11:07:06 localhost kernel: pci_bus 0000:12: resource 2 [mem 0xfa000000-0xfa1fffff 64bit pref]
Nov 26 11:07:06 localhost kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 26 11:07:06 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 26 11:07:06 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 26 11:07:06 localhost kernel: software IO TLB: mapped [mem 0x000000007bfdb000-0x000000007ffdb000] (64MB)
Nov 26 11:07:06 localhost kernel: ACPI: bus type thunderbolt registered
Nov 26 11:07:06 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 26 11:07:06 localhost kernel: Initialise system trusted keyrings
Nov 26 11:07:06 localhost kernel: Key type blacklist registered
Nov 26 11:07:06 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 26 11:07:06 localhost kernel: zbud: loaded
Nov 26 11:07:06 localhost kernel: integrity: Platform Keyring initialized
Nov 26 11:07:06 localhost kernel: integrity: Machine keyring initialized
Nov 26 11:07:06 localhost kernel: Freeing initrd memory: 85868K
Nov 26 11:07:06 localhost kernel: NET: Registered PF_ALG protocol family
Nov 26 11:07:06 localhost kernel: xor: automatically using best checksumming function   avx       
Nov 26 11:07:06 localhost kernel: Key type asymmetric registered
Nov 26 11:07:06 localhost kernel: Asymmetric key parser 'x509' registered
Nov 26 11:07:06 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 26 11:07:06 localhost kernel: io scheduler mq-deadline registered
Nov 26 11:07:06 localhost kernel: io scheduler kyber registered
Nov 26 11:07:06 localhost kernel: io scheduler bfq registered
Nov 26 11:07:06 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Nov 26 11:07:06 localhost kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39
Nov 26 11:07:06 localhost kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40
Nov 26 11:07:06 localhost kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40
Nov 26 11:07:06 localhost kernel: shpchp 0000:01:00.0: HPC vendor_id 1b36 device_id e ss_vid 0 ss_did 0
Nov 26 11:07:06 localhost kernel: shpchp 0000:01:00.0: pci_hp_register failed with error -16
Nov 26 11:07:06 localhost kernel: shpchp 0000:01:00.0: Slot initialization failed
Nov 26 11:07:06 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 26 11:07:06 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 26 11:07:06 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 26 11:07:06 localhost kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Nov 26 11:07:06 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 26 11:07:06 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 26 11:07:06 localhost kernel: Non-volatile memory driver v1.3
Nov 26 11:07:06 localhost kernel: rdac: device handler registered
Nov 26 11:07:06 localhost kernel: hp_sw: device handler registered
Nov 26 11:07:06 localhost kernel: emc: device handler registered
Nov 26 11:07:06 localhost kernel: alua: device handler registered
Nov 26 11:07:06 localhost kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller
Nov 26 11:07:06 localhost kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1
Nov 26 11:07:06 localhost kernel: uhci_hcd 0000:02:01.0: detected 2 ports
Nov 26 11:07:06 localhost kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x0000c000
Nov 26 11:07:06 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 26 11:07:06 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 26 11:07:06 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 26 11:07:06 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 26 11:07:06 localhost kernel: usb usb1: SerialNumber: 0000:02:01.0
Nov 26 11:07:06 localhost kernel: hub 1-0:1.0: USB hub found
Nov 26 11:07:06 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 26 11:07:06 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 26 11:07:06 localhost kernel: usbserial: USB Serial support registered for generic
Nov 26 11:07:06 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 26 11:07:06 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 26 11:07:06 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 26 11:07:06 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 26 11:07:06 localhost kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 26 11:07:06 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 26 11:07:06 localhost kernel: rtc_cmos 00:03: registered as rtc0
Nov 26 11:07:06 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 26 11:07:06 localhost kernel: rtc_cmos 00:03: setting system clock to 2025-11-26T11:07:06 UTC (1764155226)
Nov 26 11:07:06 localhost kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 26 11:07:06 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 26 11:07:06 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 26 11:07:06 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 26 11:07:06 localhost kernel: usbcore: registered new interface driver usbhid
Nov 26 11:07:06 localhost kernel: usbhid: USB HID core driver
Nov 26 11:07:06 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 26 11:07:06 localhost kernel: Initializing XFRM netlink socket
Nov 26 11:07:06 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 26 11:07:06 localhost kernel: Segment Routing with IPv6
Nov 26 11:07:06 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 26 11:07:06 localhost kernel: mpls_gso: MPLS GSO support
Nov 26 11:07:06 localhost kernel: IPI shorthand broadcast: enabled
Nov 26 11:07:06 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 26 11:07:06 localhost kernel: AES CTR mode by8 optimization enabled
Nov 26 11:07:06 localhost kernel: sched_clock: Marking stable (1085001164, 143980718)->(1331539632, -102557750)
Nov 26 11:07:06 localhost kernel: registered taskstats version 1
Nov 26 11:07:06 localhost kernel: Loading compiled-in X.509 certificates
Nov 26 11:07:06 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 26 11:07:06 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 26 11:07:06 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 26 11:07:06 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 26 11:07:06 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 26 11:07:06 localhost kernel: Demotion targets for Node 0: null
Nov 26 11:07:06 localhost kernel: page_owner is disabled
Nov 26 11:07:06 localhost kernel: Key type .fscrypt registered
Nov 26 11:07:06 localhost kernel: Key type fscrypt-provisioning registered
Nov 26 11:07:06 localhost kernel: Key type big_key registered
Nov 26 11:07:06 localhost kernel: Key type encrypted registered
Nov 26 11:07:06 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 26 11:07:06 localhost kernel: Loading compiled-in module X.509 certificates
Nov 26 11:07:06 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 26 11:07:06 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 26 11:07:06 localhost kernel: ima: No architecture policies found
Nov 26 11:07:06 localhost kernel: evm: Initialising EVM extended attributes:
Nov 26 11:07:06 localhost kernel: evm: security.selinux
Nov 26 11:07:06 localhost kernel: evm: security.SMACK64 (disabled)
Nov 26 11:07:06 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 26 11:07:06 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 26 11:07:06 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 26 11:07:06 localhost kernel: evm: security.apparmor (disabled)
Nov 26 11:07:06 localhost kernel: evm: security.ima
Nov 26 11:07:06 localhost kernel: evm: security.capability
Nov 26 11:07:06 localhost kernel: evm: HMAC attrs: 0x1
Nov 26 11:07:06 localhost kernel: Running certificate verification RSA selftest
Nov 26 11:07:06 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 26 11:07:06 localhost kernel: Running certificate verification ECDSA selftest
Nov 26 11:07:06 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 26 11:07:06 localhost kernel: clk: Disabling unused clocks
Nov 26 11:07:06 localhost kernel: Freeing unused decrypted memory: 2028K
Nov 26 11:07:06 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 26 11:07:06 localhost kernel: Write protecting the kernel read-only data: 30720k
Nov 26 11:07:06 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 26 11:07:06 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 26 11:07:06 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 26 11:07:06 localhost kernel: Run /init as init process
Nov 26 11:07:06 localhost kernel:   with arguments:
Nov 26 11:07:06 localhost kernel:     /init
Nov 26 11:07:06 localhost kernel:   with environment:
Nov 26 11:07:06 localhost kernel:     HOME=/
Nov 26 11:07:06 localhost kernel:     TERM=linux
Nov 26 11:07:06 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64
Nov 26 11:07:06 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 26 11:07:06 localhost systemd[1]: Detected virtualization kvm.
Nov 26 11:07:06 localhost systemd[1]: Detected architecture x86-64.
Nov 26 11:07:06 localhost systemd[1]: Running in initrd.
Nov 26 11:07:06 localhost systemd[1]: No hostname configured, using default hostname.
Nov 26 11:07:06 localhost systemd[1]: Hostname set to <localhost>.
Nov 26 11:07:06 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 26 11:07:06 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 26 11:07:06 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 26 11:07:06 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 26 11:07:06 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 26 11:07:06 localhost systemd[1]: Reached target Local File Systems.
Nov 26 11:07:06 localhost systemd[1]: Reached target Path Units.
Nov 26 11:07:06 localhost systemd[1]: Reached target Slice Units.
Nov 26 11:07:06 localhost systemd[1]: Reached target Swaps.
Nov 26 11:07:06 localhost systemd[1]: Reached target Timer Units.
Nov 26 11:07:06 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 26 11:07:06 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 26 11:07:06 localhost systemd[1]: Listening on Journal Socket.
Nov 26 11:07:06 localhost systemd[1]: Listening on udev Control Socket.
Nov 26 11:07:06 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 26 11:07:06 localhost systemd[1]: Reached target Socket Units.
Nov 26 11:07:06 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 26 11:07:06 localhost systemd[1]: Starting Journal Service...
Nov 26 11:07:06 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 26 11:07:06 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 26 11:07:06 localhost systemd[1]: Starting Create System Users...
Nov 26 11:07:06 localhost systemd[1]: Starting Setup Virtual Console...
Nov 26 11:07:06 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 26 11:07:06 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 26 11:07:06 localhost systemd[1]: Finished Create System Users.
Nov 26 11:07:06 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 26 11:07:06 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 26 11:07:06 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 26 11:07:06 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 26 11:07:06 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:02.0:00.0:01.0-1
Nov 26 11:07:06 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 26 11:07:06 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0
Nov 26 11:07:06 localhost systemd-journald[279]: Journal started
Nov 26 11:07:06 localhost systemd-journald[279]: Runtime Journal (/run/log/journal/99bbe82211064372ba1dab3f2104eabd) is 8.0M, max 153.6M, 145.6M free.
Nov 26 11:07:06 localhost systemd-sysusers[282]: Creating group 'users' with GID 100.
Nov 26 11:07:06 localhost systemd-sysusers[282]: Creating group 'dbus' with GID 81.
Nov 26 11:07:06 localhost systemd-sysusers[282]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 26 11:07:06 localhost systemd[1]: Started Journal Service.
Nov 26 11:07:06 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 26 11:07:06 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 26 11:07:07 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 26 11:07:07 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 26 11:07:07 localhost systemd[1]: Finished Setup Virtual Console.
Nov 26 11:07:07 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 26 11:07:07 localhost systemd[1]: Starting dracut cmdline hook...
Nov 26 11:07:07 localhost dracut-cmdline[298]: dracut-9 dracut-057-102.git20250818.el9
Nov 26 11:07:07 localhost dracut-cmdline[298]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 26 11:07:07 localhost systemd[1]: Finished dracut cmdline hook.
Nov 26 11:07:07 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 26 11:07:07 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 26 11:07:07 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 26 11:07:07 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 26 11:07:07 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 26 11:07:07 localhost kernel: RPC: Registered udp transport module.
Nov 26 11:07:07 localhost kernel: RPC: Registered tcp transport module.
Nov 26 11:07:07 localhost kernel: RPC: Registered tcp-with-tls transport module.
Nov 26 11:07:07 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 26 11:07:07 localhost rpc.statd[413]: Version 2.5.4 starting
Nov 26 11:07:07 localhost rpc.statd[413]: Initializing NSM state
Nov 26 11:07:07 localhost rpc.idmapd[418]: Setting log level to 0
Nov 26 11:07:07 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 26 11:07:07 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 26 11:07:07 localhost systemd-udevd[431]: Using default interface naming scheme 'rhel-9.0'.
Nov 26 11:07:07 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 26 11:07:07 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 26 11:07:07 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 26 11:07:07 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 26 11:07:07 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 26 11:07:07 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 26 11:07:07 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 26 11:07:07 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 26 11:07:07 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 26 11:07:07 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 26 11:07:07 localhost systemd[1]: Reached target Network.
Nov 26 11:07:07 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 26 11:07:07 localhost systemd[1]: Starting dracut initqueue hook...
Nov 26 11:07:07 localhost kernel: virtio_blk virtio2: 4/0/0 default/read/poll queues
Nov 26 11:07:07 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 26 11:07:07 localhost kernel:  vda: vda1
Nov 26 11:07:07 localhost kernel: libata version 3.00 loaded.
Nov 26 11:07:07 localhost systemd-udevd[450]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 11:07:07 localhost kernel: ahci 0000:00:1f.2: version 3.0
Nov 26 11:07:07 localhost kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 26 11:07:07 localhost systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 26 11:07:07 localhost systemd[1]: Reached target Initrd Root Device.
Nov 26 11:07:07 localhost kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 26 11:07:07 localhost kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 26 11:07:07 localhost kernel: ahci 0000:00:1f.2: flags: 64bit ncq only 
Nov 26 11:07:07 localhost kernel: scsi host0: ahci
Nov 26 11:07:07 localhost kernel: scsi host1: ahci
Nov 26 11:07:07 localhost kernel: scsi host2: ahci
Nov 26 11:07:07 localhost kernel: scsi host3: ahci
Nov 26 11:07:07 localhost kernel: scsi host4: ahci
Nov 26 11:07:07 localhost kernel: scsi host5: ahci
Nov 26 11:07:07 localhost kernel: ata1: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22100 irq 49 lpm-pol 0
Nov 26 11:07:07 localhost kernel: ata2: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22180 irq 49 lpm-pol 0
Nov 26 11:07:07 localhost kernel: ata3: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22200 irq 49 lpm-pol 0
Nov 26 11:07:07 localhost kernel: ata4: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22280 irq 49 lpm-pol 0
Nov 26 11:07:07 localhost kernel: ata5: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22300 irq 49 lpm-pol 0
Nov 26 11:07:07 localhost kernel: ata6: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22380 irq 49 lpm-pol 0
Nov 26 11:07:07 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 26 11:07:07 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 26 11:07:07 localhost systemd[1]: Reached target System Initialization.
Nov 26 11:07:07 localhost systemd[1]: Reached target Basic System.
Nov 26 11:07:07 localhost kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 26 11:07:07 localhost kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 26 11:07:07 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 26 11:07:07 localhost kernel: ata1.00: applying bridge limits
Nov 26 11:07:07 localhost kernel: ata1.00: configured for UDMA/100
Nov 26 11:07:07 localhost kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 26 11:07:07 localhost kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 26 11:07:07 localhost kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 26 11:07:07 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 26 11:07:07 localhost kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 26 11:07:07 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 26 11:07:07 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 26 11:07:07 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 26 11:07:07 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 26 11:07:08 localhost systemd[1]: Finished dracut initqueue hook.
Nov 26 11:07:08 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 26 11:07:08 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 26 11:07:08 localhost systemd[1]: Reached target Remote File Systems.
Nov 26 11:07:08 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 26 11:07:08 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 26 11:07:08 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Nov 26 11:07:08 localhost systemd-fsck[526]: /usr/sbin/fsck.xfs: XFS file system.
Nov 26 11:07:08 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 26 11:07:08 localhost systemd[1]: Mounting /sysroot...
Nov 26 11:07:08 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 26 11:07:08 localhost kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Nov 26 11:07:08 localhost kernel: XFS (vda1): Ending clean mount
Nov 26 11:07:08 localhost systemd[1]: Mounted /sysroot.
Nov 26 11:07:08 localhost systemd[1]: Reached target Initrd Root File System.
Nov 26 11:07:08 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 26 11:07:08 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 26 11:07:08 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 26 11:07:08 localhost systemd[1]: Reached target Initrd File Systems.
Nov 26 11:07:08 localhost systemd[1]: Reached target Initrd Default Target.
Nov 26 11:07:08 localhost systemd[1]: Starting dracut mount hook...
Nov 26 11:07:08 localhost systemd[1]: Finished dracut mount hook.
Nov 26 11:07:08 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 26 11:07:08 localhost rpc.idmapd[418]: exiting on signal 15
Nov 26 11:07:08 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 26 11:07:08 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 26 11:07:08 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 26 11:07:08 localhost systemd[1]: Stopped target Network.
Nov 26 11:07:08 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 26 11:07:08 localhost systemd[1]: Stopped target Timer Units.
Nov 26 11:07:08 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 26 11:07:08 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 26 11:07:08 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 26 11:07:08 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 26 11:07:08 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 26 11:07:08 localhost systemd[1]: Stopped target Basic System.
Nov 26 11:07:08 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 26 11:07:08 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 26 11:07:08 localhost systemd[1]: Stopped target Path Units.
Nov 26 11:07:08 localhost systemd[1]: Stopped target Remote File Systems.
Nov 26 11:07:08 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 26 11:07:08 localhost systemd[1]: Stopped target Slice Units.
Nov 26 11:07:08 localhost systemd[1]: Stopped target Socket Units.
Nov 26 11:07:08 localhost systemd[1]: Stopped target System Initialization.
Nov 26 11:07:08 localhost systemd[1]: Stopped target Local File Systems.
Nov 26 11:07:08 localhost systemd[1]: Stopped target Swaps.
Nov 26 11:07:08 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 26 11:07:08 localhost systemd[1]: Stopped dracut mount hook.
Nov 26 11:07:08 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 26 11:07:08 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 26 11:07:08 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 26 11:07:08 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 26 11:07:08 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 26 11:07:08 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 26 11:07:08 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 26 11:07:08 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 26 11:07:08 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 26 11:07:08 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 26 11:07:08 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 26 11:07:08 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 26 11:07:08 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 26 11:07:08 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 26 11:07:08 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 26 11:07:08 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 26 11:07:08 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 26 11:07:08 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 26 11:07:08 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 26 11:07:08 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 26 11:07:08 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 26 11:07:08 localhost systemd[1]: Closed udev Control Socket.
Nov 26 11:07:08 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 26 11:07:08 localhost systemd[1]: Closed udev Kernel Socket.
Nov 26 11:07:08 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 26 11:07:08 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 26 11:07:08 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 26 11:07:08 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 26 11:07:08 localhost systemd[1]: Starting Cleanup udev Database...
Nov 26 11:07:08 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 26 11:07:08 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 26 11:07:08 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 26 11:07:08 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 26 11:07:08 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 26 11:07:08 localhost systemd[1]: Stopped Create System Users.
Nov 26 11:07:08 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 26 11:07:08 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 26 11:07:08 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 26 11:07:08 localhost systemd[1]: Finished Cleanup udev Database.
Nov 26 11:07:08 localhost systemd[1]: Reached target Switch Root.
Nov 26 11:07:08 localhost systemd[1]: Starting Switch Root...
Nov 26 11:07:08 localhost systemd[1]: Switching root.
Nov 26 11:07:08 localhost systemd-journald[279]: Received SIGTERM from PID 1 (systemd).
Nov 26 11:07:08 localhost systemd-journald[279]: Journal stopped
Nov 26 11:07:09 localhost kernel: audit: type=1404 audit(1764155228.778:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 26 11:07:09 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 11:07:09 localhost kernel: SELinux:  policy capability open_perms=1
Nov 26 11:07:09 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 11:07:09 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 26 11:07:09 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 11:07:09 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 11:07:09 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 11:07:09 localhost kernel: audit: type=1403 audit(1764155228.886:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 26 11:07:09 localhost systemd[1]: Successfully loaded SELinux policy in 110.997ms.
Nov 26 11:07:09 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.828ms.
Nov 26 11:07:09 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 26 11:07:09 localhost systemd[1]: Detected virtualization kvm.
Nov 26 11:07:09 localhost systemd[1]: Detected architecture x86-64.
Nov 26 11:07:09 localhost systemd-rc-local-generator[606]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:07:09 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 26 11:07:09 localhost systemd[1]: Stopped Switch Root.
Nov 26 11:07:09 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 26 11:07:09 localhost systemd[1]: Created slice Slice /system/getty.
Nov 26 11:07:09 localhost systemd[1]: Created slice Slice /system/serial-getty.
Nov 26 11:07:09 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Nov 26 11:07:09 localhost systemd[1]: Created slice User and Session Slice.
Nov 26 11:07:09 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 26 11:07:09 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 26 11:07:09 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 26 11:07:09 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 26 11:07:09 localhost systemd[1]: Stopped target Switch Root.
Nov 26 11:07:09 localhost systemd[1]: Stopped target Initrd File Systems.
Nov 26 11:07:09 localhost systemd[1]: Stopped target Initrd Root File System.
Nov 26 11:07:09 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Nov 26 11:07:09 localhost systemd[1]: Reached target Path Units.
Nov 26 11:07:09 localhost systemd[1]: Reached target rpc_pipefs.target.
Nov 26 11:07:09 localhost systemd[1]: Reached target Slice Units.
Nov 26 11:07:09 localhost systemd[1]: Reached target Swaps.
Nov 26 11:07:09 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Nov 26 11:07:09 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 26 11:07:09 localhost systemd[1]: Reached target RPC Port Mapper.
Nov 26 11:07:09 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 26 11:07:09 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Nov 26 11:07:09 localhost systemd[1]: Listening on udev Control Socket.
Nov 26 11:07:09 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 26 11:07:09 localhost systemd[1]: Mounting Huge Pages File System...
Nov 26 11:07:09 localhost systemd[1]: Mounting POSIX Message Queue File System...
Nov 26 11:07:09 localhost systemd[1]: Mounting Kernel Debug File System...
Nov 26 11:07:09 localhost systemd[1]: Mounting Kernel Trace File System...
Nov 26 11:07:09 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 26 11:07:09 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 26 11:07:09 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 26 11:07:09 localhost systemd[1]: Starting Load Kernel Module drm...
Nov 26 11:07:09 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Nov 26 11:07:09 localhost systemd[1]: Starting Load Kernel Module fuse...
Nov 26 11:07:09 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 26 11:07:09 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 26 11:07:09 localhost systemd[1]: Stopped File System Check on Root Device.
Nov 26 11:07:09 localhost systemd[1]: Stopped Journal Service.
Nov 26 11:07:09 localhost systemd[1]: Starting Journal Service...
Nov 26 11:07:09 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 26 11:07:09 localhost kernel: fuse: init (API version 7.37)
Nov 26 11:07:09 localhost systemd[1]: Starting Generate network units from Kernel command line...
Nov 26 11:07:09 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 26 11:07:09 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Nov 26 11:07:09 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 26 11:07:09 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 26 11:07:09 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 26 11:07:09 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 26 11:07:09 localhost systemd-journald[648]: Journal started
Nov 26 11:07:09 localhost systemd-journald[648]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 26 11:07:09 localhost systemd[1]: Queued start job for default target Multi-User System.
Nov 26 11:07:09 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 26 11:07:09 localhost systemd[1]: Mounted Huge Pages File System.
Nov 26 11:07:09 localhost systemd[1]: Started Journal Service.
Nov 26 11:07:09 localhost systemd[1]: Mounted POSIX Message Queue File System.
Nov 26 11:07:09 localhost systemd[1]: Mounted Kernel Debug File System.
Nov 26 11:07:09 localhost systemd[1]: Mounted Kernel Trace File System.
Nov 26 11:07:09 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 26 11:07:09 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 26 11:07:09 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 26 11:07:09 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 26 11:07:09 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 26 11:07:09 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 26 11:07:09 localhost systemd[1]: Finished Load Kernel Module fuse.
Nov 26 11:07:09 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 26 11:07:09 localhost systemd[1]: Finished Generate network units from Kernel command line.
Nov 26 11:07:09 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 26 11:07:09 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 26 11:07:09 localhost systemd[1]: Mounting FUSE Control File System...
Nov 26 11:07:09 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 26 11:07:09 localhost systemd[1]: Starting Rebuild Hardware Database...
Nov 26 11:07:09 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 26 11:07:09 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 26 11:07:09 localhost kernel: ACPI: bus type drm_connector registered
Nov 26 11:07:09 localhost systemd-journald[648]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 26 11:07:09 localhost systemd-journald[648]: Received client request to flush runtime journal.
Nov 26 11:07:09 localhost systemd[1]: Starting Load/Save OS Random Seed...
Nov 26 11:07:09 localhost systemd[1]: Starting Create System Users...
Nov 26 11:07:09 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 26 11:07:09 localhost systemd[1]: Finished Load Kernel Module drm.
Nov 26 11:07:09 localhost systemd[1]: Mounted FUSE Control File System.
Nov 26 11:07:09 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 26 11:07:09 localhost systemd[1]: Finished Load/Save OS Random Seed.
Nov 26 11:07:09 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 26 11:07:09 localhost systemd[1]: Finished Create System Users.
Nov 26 11:07:09 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 26 11:07:09 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 26 11:07:09 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 26 11:07:09 localhost systemd[1]: Reached target Preparation for Local File Systems.
Nov 26 11:07:09 localhost systemd[1]: Reached target Local File Systems.
Nov 26 11:07:09 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 26 11:07:09 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 26 11:07:09 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 26 11:07:09 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 26 11:07:09 localhost systemd[1]: Starting Automatic Boot Loader Update...
Nov 26 11:07:09 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 26 11:07:09 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 26 11:07:09 localhost bootctl[664]: Couldn't find EFI system partition, skipping.
Nov 26 11:07:09 localhost systemd[1]: Finished Automatic Boot Loader Update.
Nov 26 11:07:09 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 26 11:07:09 localhost systemd[1]: Starting Security Auditing Service...
Nov 26 11:07:09 localhost systemd[1]: Starting RPC Bind...
Nov 26 11:07:09 localhost systemd[1]: Starting Rebuild Journal Catalog...
Nov 26 11:07:09 localhost auditd[671]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 26 11:07:09 localhost auditd[671]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 26 11:07:09 localhost systemd[1]: Started RPC Bind.
Nov 26 11:07:09 localhost systemd[1]: Finished Rebuild Journal Catalog.
Nov 26 11:07:09 localhost augenrules[676]: /sbin/augenrules: No change
Nov 26 11:07:09 localhost augenrules[691]: No rules
Nov 26 11:07:09 localhost augenrules[691]: enabled 1
Nov 26 11:07:09 localhost augenrules[691]: failure 1
Nov 26 11:07:09 localhost augenrules[691]: pid 671
Nov 26 11:07:09 localhost augenrules[691]: rate_limit 0
Nov 26 11:07:09 localhost augenrules[691]: backlog_limit 8192
Nov 26 11:07:09 localhost augenrules[691]: lost 0
Nov 26 11:07:09 localhost augenrules[691]: backlog 0
Nov 26 11:07:09 localhost augenrules[691]: backlog_wait_time 60000
Nov 26 11:07:09 localhost augenrules[691]: backlog_wait_time_actual 0
Nov 26 11:07:09 localhost augenrules[691]: enabled 1
Nov 26 11:07:09 localhost augenrules[691]: failure 1
Nov 26 11:07:09 localhost augenrules[691]: pid 671
Nov 26 11:07:09 localhost augenrules[691]: rate_limit 0
Nov 26 11:07:09 localhost augenrules[691]: backlog_limit 8192
Nov 26 11:07:09 localhost augenrules[691]: lost 0
Nov 26 11:07:09 localhost augenrules[691]: backlog 3
Nov 26 11:07:09 localhost augenrules[691]: backlog_wait_time 60000
Nov 26 11:07:09 localhost augenrules[691]: backlog_wait_time_actual 0
Nov 26 11:07:09 localhost augenrules[691]: enabled 1
Nov 26 11:07:09 localhost augenrules[691]: failure 1
Nov 26 11:07:09 localhost augenrules[691]: pid 671
Nov 26 11:07:09 localhost augenrules[691]: rate_limit 0
Nov 26 11:07:09 localhost augenrules[691]: backlog_limit 8192
Nov 26 11:07:09 localhost augenrules[691]: lost 0
Nov 26 11:07:09 localhost augenrules[691]: backlog 1
Nov 26 11:07:09 localhost augenrules[691]: backlog_wait_time 60000
Nov 26 11:07:09 localhost augenrules[691]: backlog_wait_time_actual 0
Nov 26 11:07:09 localhost systemd[1]: Started Security Auditing Service.
Nov 26 11:07:09 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 26 11:07:09 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 26 11:07:09 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 26 11:07:09 localhost systemd[1]: Finished Rebuild Hardware Database.
Nov 26 11:07:09 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 26 11:07:09 localhost systemd[1]: Starting Update is Completed...
Nov 26 11:07:09 localhost systemd[1]: Finished Update is Completed.
Nov 26 11:07:09 localhost systemd-udevd[699]: Using default interface naming scheme 'rhel-9.0'.
Nov 26 11:07:09 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 26 11:07:09 localhost systemd[1]: Reached target System Initialization.
Nov 26 11:07:09 localhost systemd[1]: Started dnf makecache --timer.
Nov 26 11:07:09 localhost systemd[1]: Started Daily rotation of log files.
Nov 26 11:07:09 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 26 11:07:09 localhost systemd[1]: Reached target Timer Units.
Nov 26 11:07:09 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 26 11:07:09 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 26 11:07:09 localhost systemd[1]: Reached target Socket Units.
Nov 26 11:07:09 localhost systemd[1]: Starting D-Bus System Message Bus...
Nov 26 11:07:09 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 26 11:07:09 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 26 11:07:09 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 26 11:07:09 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 26 11:07:09 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 26 11:07:09 localhost systemd[1]: Started D-Bus System Message Bus.
Nov 26 11:07:09 localhost systemd[1]: Reached target Basic System.
Nov 26 11:07:09 localhost dbus-broker-lau[724]: Ready
Nov 26 11:07:09 localhost systemd-udevd[715]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 11:07:09 localhost systemd[1]: Starting NTP client/server...
Nov 26 11:07:09 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 26 11:07:09 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 26 11:07:09 localhost systemd[1]: Starting IPv4 firewall with iptables...
Nov 26 11:07:09 localhost systemd[1]: Started irqbalance daemon.
Nov 26 11:07:09 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 26 11:07:09 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 26 11:07:09 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 26 11:07:09 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 26 11:07:09 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 26 11:07:09 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 26 11:07:09 localhost systemd[1]: Reached target User and Group Name Lookups.
Nov 26 11:07:09 localhost chronyd[746]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 26 11:07:09 localhost systemd[1]: Starting User Login Management...
Nov 26 11:07:09 localhost chronyd[746]: Loaded 0 symmetric keys
Nov 26 11:07:09 localhost chronyd[746]: Using right/UTC timezone to obtain leap second data
Nov 26 11:07:09 localhost chronyd[746]: Loaded seccomp filter (level 2)
Nov 26 11:07:09 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 26 11:07:09 localhost systemd[1]: Started NTP client/server.
Nov 26 11:07:09 localhost systemd-logind[744]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 26 11:07:09 localhost systemd-logind[744]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 26 11:07:09 localhost systemd-logind[744]: New seat seat0.
Nov 26 11:07:09 localhost systemd[1]: Started User Login Management.
Nov 26 11:07:09 localhost kernel: lpc_ich 0000:00:1f.0: I/O space for GPIO uninitialized
Nov 26 11:07:09 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 26 11:07:09 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 26 11:07:09 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Nov 26 11:07:09 localhost kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Nov 26 11:07:09 localhost kernel: Console: switching to colour dummy device 80x25
Nov 26 11:07:09 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 26 11:07:09 localhost kernel: [drm] features: -context_init
Nov 26 11:07:09 localhost kernel: [drm] number of scanouts: 1
Nov 26 11:07:09 localhost kernel: [drm] number of cap sets: 0
Nov 26 11:07:09 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Nov 26 11:07:09 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 26 11:07:09 localhost kernel: Console: switching to colour frame buffer device 160x50
Nov 26 11:07:09 localhost kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 26 11:07:09 localhost kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 26 11:07:09 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 26 11:07:09 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 26 11:07:09 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 26 11:07:10 localhost kernel: iTCO_vendor_support: vendor-support=0
Nov 26 11:07:10 localhost kernel: iTCO_wdt iTCO_wdt.1.auto: Found a ICH9 TCO device (Version=2, TCOBASE=0x0660)
Nov 26 11:07:10 localhost kernel: iTCO_wdt iTCO_wdt.1.auto: initialized. heartbeat=30 sec (nowayout=0)
Nov 26 11:07:10 localhost iptables.init[738]: iptables: Applying firewall rules: [  OK  ]
Nov 26 11:07:10 localhost systemd[1]: Finished IPv4 firewall with iptables.
Nov 26 11:07:10 localhost kernel: kvm_amd: TSC scaling supported
Nov 26 11:07:10 localhost kernel: kvm_amd: Nested Virtualization enabled
Nov 26 11:07:10 localhost kernel: kvm_amd: Nested Paging enabled
Nov 26 11:07:10 localhost kernel: kvm_amd: LBR virtualization supported
Nov 26 11:07:10 localhost kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 26 11:07:10 localhost kernel: kvm_amd: Virtual GIF supported
Nov 26 11:07:10 localhost cloud-init[792]: Cloud-init v. 24.4-7.el9 running 'init-local' at Wed, 26 Nov 2025 11:07:10 +0000. Up 4.84 seconds.
Nov 26 11:07:10 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 26 11:07:10 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Nov 26 11:07:10 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpghcwtdo6.mount: Deactivated successfully.
Nov 26 11:07:10 localhost systemd[1]: Starting Hostname Service...
Nov 26 11:07:10 localhost systemd[1]: Started Hostname Service.
Nov 26 11:07:10 np0005536539 systemd-hostnamed[806]: Hostname set to <np0005536539> (static)
Nov 26 11:07:10 np0005536539 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 26 11:07:10 np0005536539 systemd[1]: Reached target Preparation for Network.
Nov 26 11:07:10 np0005536539 systemd[1]: Starting Network Manager...
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.6833] NetworkManager (version 1.54.1-1.el9) is starting... (boot:85c12273-0edc-4b34-861a-c0940ef400f5)
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.6837] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.6929] manager[0x556e89b14080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.6960] hostname: hostname: using hostnamed
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.6961] hostname: static hostname changed from (none) to "np0005536539"
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.6963] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7043] manager[0x556e89b14080]: rfkill: Wi-Fi hardware radio set enabled
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7044] manager[0x556e89b14080]: rfkill: WWAN hardware radio set enabled
Nov 26 11:07:10 np0005536539 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7109] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7109] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7109] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7110] manager: Networking is enabled by state file
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7111] settings: Loaded settings plugin: keyfile (internal)
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7127] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7159] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7175] dhcp: init: Using DHCP client 'internal'
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7178] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7192] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7202] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7209] device (lo): Activation: starting connection 'lo' (868fb90f-4437-4595-9529-b8bb5b9dbd08)
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7217] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 26 11:07:10 np0005536539 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7221] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 11:07:10 np0005536539 systemd[1]: Started Network Manager.
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7260] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7267] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7269] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7271] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7275] device (eth0): carrier: link connected
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7278] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 26 11:07:10 np0005536539 systemd[1]: Reached target Network.
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7303] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7309] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7313] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7314] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 11:07:10 np0005536539 systemd[1]: Starting Network Manager Wait Online...
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7325] manager: NetworkManager state is now CONNECTING
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7328] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7334] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7339] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7344] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Nov 26 11:07:10 np0005536539 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 26 11:07:10 np0005536539 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7398] dhcp4 (eth0): state changed new lease, address=192.168.26.91
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7406] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7418] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7423] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 26 11:07:10 np0005536539 NetworkManager[810]: <info>  [1764155230.7430] device (lo): Activation: successful, device activated.
Nov 26 11:07:10 np0005536539 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 26 11:07:10 np0005536539 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 26 11:07:10 np0005536539 systemd[1]: Reached target NFS client services.
Nov 26 11:07:10 np0005536539 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 26 11:07:10 np0005536539 systemd[1]: Reached target Remote File Systems.
Nov 26 11:07:10 np0005536539 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 26 11:07:11 np0005536539 NetworkManager[810]: <info>  [1764155231.7876] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 11:07:12 np0005536539 NetworkManager[810]: <info>  [1764155232.8425] dhcp6 (eth0): state changed new lease, address=2001:db8::cb
Nov 26 11:07:13 np0005536539 NetworkManager[810]: <info>  [1764155233.9002] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 11:07:13 np0005536539 NetworkManager[810]: <info>  [1764155233.9034] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 11:07:13 np0005536539 NetworkManager[810]: <info>  [1764155233.9036] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 11:07:13 np0005536539 NetworkManager[810]: <info>  [1764155233.9040] manager: NetworkManager state is now CONNECTED_SITE
Nov 26 11:07:13 np0005536539 NetworkManager[810]: <info>  [1764155233.9044] device (eth0): Activation: successful, device activated.
Nov 26 11:07:13 np0005536539 NetworkManager[810]: <info>  [1764155233.9047] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 26 11:07:13 np0005536539 NetworkManager[810]: <info>  [1764155233.9050] manager: startup complete
Nov 26 11:07:13 np0005536539 systemd[1]: Finished Network Manager Wait Online.
Nov 26 11:07:13 np0005536539 systemd[1]: Starting Cloud-init: Network Stage...
Nov 26 11:07:14 np0005536539 cloud-init[876]: Cloud-init v. 24.4-7.el9 running 'init' at Wed, 26 Nov 2025 11:07:14 +0000. Up 8.71 seconds.
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: |  eth0  | True |        192.168.26.91         | 255.255.255.0 | global | fa:16:3e:78:81:05 |
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: |  eth0  | True |       2001:db8::cb/128       |       .       | global | fa:16:3e:78:81:05 |
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: |  eth0  | True | fe80::f816:3eff:fe78:8105/64 |       .       |  link  | fa:16:3e:78:81:05 |
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: | Route |   Destination   |   Gateway    |     Genmask     | Interface | Flags |
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: |   0   |     0.0.0.0     | 192.168.26.1 |     0.0.0.0     |    eth0   |   UG  |
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: |   1   | 169.254.169.254 | 192.168.26.2 | 255.255.255.255 |    eth0   |  UGH  |
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: |   2   |   192.168.26.0  |   0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: +++++++++++++++++++++Route IPv6 info++++++++++++++++++++++
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: +-------+--------------+-------------+-----------+-------+
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: | Route | Destination  |   Gateway   | Interface | Flags |
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: +-------+--------------+-------------+-----------+-------+
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: |   1   | 2001:db8::1  |      ::     |    eth0   |   U   |
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: |   2   | 2001:db8::cb |      ::     |    eth0   |   U   |
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: |   3   |  fe80::/64   |      ::     |    eth0   |   U   |
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: |   4   |     ::/0     | 2001:db8::1 |    eth0   |   UG  |
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: |   6   |    local     |      ::     |    eth0   |   U   |
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: |   7   |    local     |      ::     |    eth0   |   U   |
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: |   8   |  multicast   |      ::     |    eth0   |   U   |
Nov 26 11:07:14 np0005536539 cloud-init[876]: ci-info: +-------+--------------+-------------+-----------+-------+
Nov 26 11:07:14 np0005536539 useradd[943]: new group: name=cloud-user, GID=1001
Nov 26 11:07:14 np0005536539 useradd[943]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Nov 26 11:07:14 np0005536539 useradd[943]: add 'cloud-user' to group 'adm'
Nov 26 11:07:14 np0005536539 useradd[943]: add 'cloud-user' to group 'systemd-journal'
Nov 26 11:07:14 np0005536539 useradd[943]: add 'cloud-user' to shadow group 'adm'
Nov 26 11:07:14 np0005536539 useradd[943]: add 'cloud-user' to shadow group 'systemd-journal'
Nov 26 11:07:15 np0005536539 cloud-init[876]: Generating public/private rsa key pair.
Nov 26 11:07:15 np0005536539 cloud-init[876]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 26 11:07:15 np0005536539 cloud-init[876]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 26 11:07:15 np0005536539 cloud-init[876]: The key fingerprint is:
Nov 26 11:07:15 np0005536539 cloud-init[876]: SHA256:tv7eYrrVZpQUEXS3WkthJ49fYoaK5g2QgFhjaC3vwpE root@np0005536539
Nov 26 11:07:15 np0005536539 cloud-init[876]: The key's randomart image is:
Nov 26 11:07:15 np0005536539 cloud-init[876]: +---[RSA 3072]----+
Nov 26 11:07:15 np0005536539 cloud-init[876]: | +=..      .=oooo|
Nov 26 11:07:15 np0005536539 cloud-init[876]: |o+.o . .     +.=+|
Nov 26 11:07:15 np0005536539 cloud-init[876]: |. +   o     o =+o|
Nov 26 11:07:15 np0005536539 cloud-init[876]: | E .   . . o ++oo|
Nov 26 11:07:15 np0005536539 cloud-init[876]: |. o     S . o. ..|
Nov 26 11:07:15 np0005536539 cloud-init[876]: | o .   + + o     |
Nov 26 11:07:15 np0005536539 cloud-init[876]: |  .     o o +    |
Nov 26 11:07:15 np0005536539 cloud-init[876]: |       . .o+     |
Nov 26 11:07:15 np0005536539 cloud-init[876]: |        +*o..    |
Nov 26 11:07:15 np0005536539 cloud-init[876]: +----[SHA256]-----+
Nov 26 11:07:15 np0005536539 cloud-init[876]: Generating public/private ecdsa key pair.
Nov 26 11:07:15 np0005536539 cloud-init[876]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 26 11:07:15 np0005536539 cloud-init[876]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 26 11:07:15 np0005536539 cloud-init[876]: The key fingerprint is:
Nov 26 11:07:15 np0005536539 cloud-init[876]: SHA256:T7KoPAsccPa/98XktidAsOF+HzIyOj/ThYOS0VEoEM0 root@np0005536539
Nov 26 11:07:15 np0005536539 cloud-init[876]: The key's randomart image is:
Nov 26 11:07:15 np0005536539 cloud-init[876]: +---[ECDSA 256]---+
Nov 26 11:07:15 np0005536539 cloud-init[876]: |    o=   o.      |
Nov 26 11:07:15 np0005536539 cloud-init[876]: |      E =        |
Nov 26 11:07:15 np0005536539 cloud-init[876]: |. o    + =       |
Nov 26 11:07:15 np0005536539 cloud-init[876]: | + .  . + .      |
Nov 26 11:07:15 np0005536539 cloud-init[876]: |  . .  +So...    |
Nov 26 11:07:15 np0005536539 cloud-init[876]: | . . .o.==B+o    |
Nov 26 11:07:15 np0005536539 cloud-init[876]: |  o   oo.=.B=.   |
Nov 26 11:07:15 np0005536539 cloud-init[876]: |   o..o.+ .oo..  |
Nov 26 11:07:15 np0005536539 cloud-init[876]: |    +o.+.+. .o   |
Nov 26 11:07:15 np0005536539 cloud-init[876]: +----[SHA256]-----+
Nov 26 11:07:15 np0005536539 cloud-init[876]: Generating public/private ed25519 key pair.
Nov 26 11:07:15 np0005536539 cloud-init[876]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 26 11:07:15 np0005536539 cloud-init[876]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 26 11:07:15 np0005536539 cloud-init[876]: The key fingerprint is:
Nov 26 11:07:15 np0005536539 cloud-init[876]: SHA256:87z2gzOD03cOdRc4KC+IzYq6o9rPltKg++ZjkuZnxxc root@np0005536539
Nov 26 11:07:15 np0005536539 cloud-init[876]: The key's randomart image is:
Nov 26 11:07:15 np0005536539 cloud-init[876]: +--[ED25519 256]--+
Nov 26 11:07:15 np0005536539 cloud-init[876]: |                 |
Nov 26 11:07:15 np0005536539 cloud-init[876]: |            . .  |
Nov 26 11:07:15 np0005536539 cloud-init[876]: |         . . o . |
Nov 26 11:07:15 np0005536539 cloud-init[876]: |      + . o   . .|
Nov 26 11:07:15 np0005536539 cloud-init[876]: |     . +S. .  . o|
Nov 26 11:07:15 np0005536539 cloud-init[876]: |  . . .E +.  . ..|
Nov 26 11:07:15 np0005536539 cloud-init[876]: | o +.o  .oo..    |
Nov 26 11:07:15 np0005536539 cloud-init[876]: |++*++o .o B.o..  |
Nov 26 11:07:15 np0005536539 cloud-init[876]: |O@O=+ .  o.*.+.  |
Nov 26 11:07:15 np0005536539 cloud-init[876]: +----[SHA256]-----+
Nov 26 11:07:15 np0005536539 chronyd[746]: Received KoD RATE from 168.235.89.132
Nov 26 11:07:15 np0005536539 systemd[1]: Finished Cloud-init: Network Stage.
Nov 26 11:07:15 np0005536539 systemd[1]: Reached target Cloud-config availability.
Nov 26 11:07:15 np0005536539 systemd[1]: Reached target Network is Online.
Nov 26 11:07:15 np0005536539 systemd[1]: Starting Cloud-init: Config Stage...
Nov 26 11:07:15 np0005536539 systemd[1]: Starting Crash recovery kernel arming...
Nov 26 11:07:15 np0005536539 systemd[1]: Starting Notify NFS peers of a restart...
Nov 26 11:07:15 np0005536539 systemd[1]: Starting System Logging Service...
Nov 26 11:07:15 np0005536539 sm-notify[959]: Version 2.5.4 starting
Nov 26 11:07:15 np0005536539 systemd[1]: Starting OpenSSH server daemon...
Nov 26 11:07:15 np0005536539 systemd[1]: Starting Permit User Sessions...
Nov 26 11:07:15 np0005536539 systemd[1]: Started Notify NFS peers of a restart.
Nov 26 11:07:15 np0005536539 sshd[961]: Server listening on 0.0.0.0 port 22.
Nov 26 11:07:15 np0005536539 sshd[961]: Server listening on :: port 22.
Nov 26 11:07:15 np0005536539 systemd[1]: Started OpenSSH server daemon.
Nov 26 11:07:15 np0005536539 systemd[1]: Finished Permit User Sessions.
Nov 26 11:07:15 np0005536539 systemd[1]: Started Command Scheduler.
Nov 26 11:07:15 np0005536539 systemd[1]: Started Getty on tty1.
Nov 26 11:07:15 np0005536539 crond[964]: (CRON) STARTUP (1.5.7)
Nov 26 11:07:15 np0005536539 crond[964]: (CRON) INFO (Syslog will be used instead of sendmail.)
Nov 26 11:07:15 np0005536539 crond[964]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 10% if used.)
Nov 26 11:07:15 np0005536539 crond[964]: (CRON) INFO (running with inotify support)
Nov 26 11:07:15 np0005536539 rsyslogd[960]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="960" x-info="https://www.rsyslog.com"] start
Nov 26 11:07:15 np0005536539 rsyslogd[960]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 26 11:07:15 np0005536539 systemd[1]: Started Serial Getty on ttyS0.
Nov 26 11:07:15 np0005536539 systemd[1]: Reached target Login Prompts.
Nov 26 11:07:15 np0005536539 systemd[1]: Started System Logging Service.
Nov 26 11:07:15 np0005536539 systemd[1]: Reached target Multi-User System.
Nov 26 11:07:15 np0005536539 sshd-session[979]: Unable to negotiate with 192.168.26.11 port 53450: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Nov 26 11:07:15 np0005536539 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 26 11:07:15 np0005536539 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 26 11:07:15 np0005536539 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 26 11:07:15 np0005536539 sshd-session[987]: Connection reset by 192.168.26.11 port 53452 [preauth]
Nov 26 11:07:15 np0005536539 sshd-session[993]: Unable to negotiate with 192.168.26.11 port 53464: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Nov 26 11:07:15 np0005536539 sshd-session[996]: Unable to negotiate with 192.168.26.11 port 53478: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Nov 26 11:07:15 np0005536539 chronyd[746]: Selected source 172.234.25.10 (2.centos.pool.ntp.org)
Nov 26 11:07:15 np0005536539 chronyd[746]: System clock wrong by 1.207794 seconds
Nov 26 11:07:16 np0005536539 chronyd[746]: System clock was stepped by 1.207794 seconds
Nov 26 11:07:16 np0005536539 chronyd[746]: System clock TAI offset set to 37 seconds
Nov 26 11:07:16 np0005536539 sshd-session[1008]: Connection reset by 192.168.26.11 port 53498 [preauth]
Nov 26 11:07:16 np0005536539 sshd-session[1020]: Unable to negotiate with 192.168.26.11 port 53506: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Nov 26 11:07:16 np0005536539 rsyslogd[960]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 11:07:16 np0005536539 sshd-session[1023]: Unable to negotiate with 192.168.26.11 port 53514: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Nov 26 11:07:16 np0005536539 sshd-session[966]: Connection closed by 192.168.26.11 port 53446 [preauth]
Nov 26 11:07:16 np0005536539 kdumpctl[970]: kdump: No kdump initial ramdisk found.
Nov 26 11:07:16 np0005536539 kdumpctl[970]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 26 11:07:16 np0005536539 sshd-session[1002]: Connection closed by 192.168.26.11 port 53488 [preauth]
Nov 26 11:07:16 np0005536539 cloud-init[1110]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Wed, 26 Nov 2025 11:07:16 +0000. Up 10.15 seconds.
Nov 26 11:07:16 np0005536539 systemd[1]: Finished Cloud-init: Config Stage.
Nov 26 11:07:16 np0005536539 systemd[1]: Starting Cloud-init: Final Stage...
Nov 26 11:07:17 np0005536539 dracut[1238]: dracut-057-102.git20250818.el9
Nov 26 11:07:17 np0005536539 cloud-init[1256]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Wed, 26 Nov 2025 11:07:17 +0000. Up 10.46 seconds.
Nov 26 11:07:17 np0005536539 dracut[1240]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Nov 26 11:07:17 np0005536539 cloud-init[1285]: #############################################################
Nov 26 11:07:17 np0005536539 cloud-init[1287]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 26 11:07:17 np0005536539 cloud-init[1294]: 256 SHA256:T7KoPAsccPa/98XktidAsOF+HzIyOj/ThYOS0VEoEM0 root@np0005536539 (ECDSA)
Nov 26 11:07:17 np0005536539 cloud-init[1302]: 256 SHA256:87z2gzOD03cOdRc4KC+IzYq6o9rPltKg++ZjkuZnxxc root@np0005536539 (ED25519)
Nov 26 11:07:17 np0005536539 cloud-init[1310]: 3072 SHA256:tv7eYrrVZpQUEXS3WkthJ49fYoaK5g2QgFhjaC3vwpE root@np0005536539 (RSA)
Nov 26 11:07:17 np0005536539 cloud-init[1313]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 26 11:07:17 np0005536539 cloud-init[1315]: #############################################################
Nov 26 11:07:17 np0005536539 cloud-init[1256]: Cloud-init v. 24.4-7.el9 finished at Wed, 26 Nov 2025 11:07:17 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.59 seconds
Nov 26 11:07:17 np0005536539 systemd[1]: Finished Cloud-init: Final Stage.
Nov 26 11:07:17 np0005536539 systemd[1]: Reached target Cloud-init target.
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Nov 26 11:07:17 np0005536539 dracut[1240]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Nov 26 11:07:17 np0005536539 dracut[1240]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: Module 'resume' will not be installed, because it's in the list to be omitted!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: memstrack is not available
Nov 26 11:07:17 np0005536539 dracut[1240]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 26 11:07:17 np0005536539 dracut[1240]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 26 11:07:18 np0005536539 dracut[1240]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 26 11:07:18 np0005536539 dracut[1240]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 26 11:07:18 np0005536539 dracut[1240]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 26 11:07:18 np0005536539 dracut[1240]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 26 11:07:18 np0005536539 dracut[1240]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 26 11:07:18 np0005536539 dracut[1240]: memstrack is not available
Nov 26 11:07:18 np0005536539 dracut[1240]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 26 11:07:18 np0005536539 dracut[1240]: *** Including module: systemd ***
Nov 26 11:07:18 np0005536539 dracut[1240]: *** Including module: fips ***
Nov 26 11:07:18 np0005536539 dracut[1240]: *** Including module: systemd-initrd ***
Nov 26 11:07:18 np0005536539 dracut[1240]: *** Including module: i18n ***
Nov 26 11:07:18 np0005536539 dracut[1240]: *** Including module: drm ***
Nov 26 11:07:18 np0005536539 chronyd[746]: Selected source 23.186.168.125 (2.centos.pool.ntp.org)
Nov 26 11:07:19 np0005536539 dracut[1240]: *** Including module: prefixdevname ***
Nov 26 11:07:19 np0005536539 dracut[1240]: *** Including module: kernel-modules ***
Nov 26 11:07:19 np0005536539 kernel: block vda: the capability attribute has been deprecated.
Nov 26 11:07:19 np0005536539 dracut[1240]: *** Including module: kernel-modules-extra ***
Nov 26 11:07:19 np0005536539 dracut[1240]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Nov 26 11:07:19 np0005536539 dracut[1240]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Nov 26 11:07:19 np0005536539 dracut[1240]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Nov 26 11:07:19 np0005536539 dracut[1240]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Nov 26 11:07:19 np0005536539 dracut[1240]: *** Including module: qemu ***
Nov 26 11:07:19 np0005536539 dracut[1240]: *** Including module: fstab-sys ***
Nov 26 11:07:19 np0005536539 dracut[1240]: *** Including module: rootfs-block ***
Nov 26 11:07:19 np0005536539 dracut[1240]: *** Including module: terminfo ***
Nov 26 11:07:19 np0005536539 dracut[1240]: *** Including module: udev-rules ***
Nov 26 11:07:19 np0005536539 dracut[1240]: Skipping udev rule: 91-permissions.rules
Nov 26 11:07:19 np0005536539 dracut[1240]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 26 11:07:19 np0005536539 dracut[1240]: *** Including module: virtiofs ***
Nov 26 11:07:19 np0005536539 dracut[1240]: *** Including module: dracut-systemd ***
Nov 26 11:07:20 np0005536539 dracut[1240]: *** Including module: usrmount ***
Nov 26 11:07:20 np0005536539 dracut[1240]: *** Including module: base ***
Nov 26 11:07:20 np0005536539 dracut[1240]: *** Including module: fs-lib ***
Nov 26 11:07:20 np0005536539 dracut[1240]: *** Including module: kdumpbase ***
Nov 26 11:07:20 np0005536539 dracut[1240]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 26 11:07:20 np0005536539 dracut[1240]:   microcode_ctl module: mangling fw_dir
Nov 26 11:07:20 np0005536539 dracut[1240]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 26 11:07:20 np0005536539 dracut[1240]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 26 11:07:20 np0005536539 dracut[1240]:     microcode_ctl: configuration "intel" is ignored
Nov 26 11:07:20 np0005536539 dracut[1240]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 26 11:07:20 np0005536539 dracut[1240]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 26 11:07:20 np0005536539 dracut[1240]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 26 11:07:20 np0005536539 dracut[1240]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 26 11:07:20 np0005536539 dracut[1240]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 26 11:07:20 np0005536539 dracut[1240]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 26 11:07:20 np0005536539 dracut[1240]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 26 11:07:20 np0005536539 dracut[1240]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 26 11:07:20 np0005536539 dracut[1240]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 26 11:07:20 np0005536539 dracut[1240]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 26 11:07:20 np0005536539 dracut[1240]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 26 11:07:20 np0005536539 dracut[1240]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 26 11:07:20 np0005536539 dracut[1240]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 26 11:07:20 np0005536539 dracut[1240]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 26 11:07:20 np0005536539 dracut[1240]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 26 11:07:20 np0005536539 dracut[1240]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 26 11:07:20 np0005536539 dracut[1240]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 26 11:07:20 np0005536539 dracut[1240]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 26 11:07:20 np0005536539 dracut[1240]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 26 11:07:20 np0005536539 dracut[1240]: *** Including module: openssl ***
Nov 26 11:07:20 np0005536539 dracut[1240]: *** Including module: shutdown ***
Nov 26 11:07:20 np0005536539 dracut[1240]: *** Including module: squash ***
Nov 26 11:07:20 np0005536539 dracut[1240]: *** Including modules done ***
Nov 26 11:07:20 np0005536539 dracut[1240]: *** Installing kernel module dependencies ***
Nov 26 11:07:21 np0005536539 dracut[1240]: *** Installing kernel module dependencies done ***
Nov 26 11:07:21 np0005536539 dracut[1240]: *** Resolving executable dependencies ***
Nov 26 11:07:21 np0005536539 irqbalance[739]: Cannot change IRQ 45 affinity: Operation not permitted
Nov 26 11:07:21 np0005536539 irqbalance[739]: IRQ 45 affinity is now unmanaged
Nov 26 11:07:21 np0005536539 irqbalance[739]: Cannot change IRQ 44 affinity: Operation not permitted
Nov 26 11:07:21 np0005536539 irqbalance[739]: IRQ 44 affinity is now unmanaged
Nov 26 11:07:21 np0005536539 irqbalance[739]: Cannot change IRQ 42 affinity: Operation not permitted
Nov 26 11:07:21 np0005536539 irqbalance[739]: IRQ 42 affinity is now unmanaged
Nov 26 11:07:22 np0005536539 dracut[1240]: *** Resolving executable dependencies done ***
Nov 26 11:07:22 np0005536539 dracut[1240]: *** Generating early-microcode cpio image ***
Nov 26 11:07:22 np0005536539 dracut[1240]: *** Store current command line parameters ***
Nov 26 11:07:22 np0005536539 dracut[1240]: Stored kernel commandline:
Nov 26 11:07:22 np0005536539 dracut[1240]: No dracut internal kernel commandline stored in the initramfs
Nov 26 11:07:22 np0005536539 dracut[1240]: *** Install squash loader ***
Nov 26 11:07:23 np0005536539 dracut[1240]: *** Squashing the files inside the initramfs ***
Nov 26 11:07:24 np0005536539 dracut[1240]: *** Squashing the files inside the initramfs done ***
Nov 26 11:07:24 np0005536539 dracut[1240]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 26 11:07:24 np0005536539 dracut[1240]: *** Hardlinking files ***
Nov 26 11:07:24 np0005536539 dracut[1240]: Mode:           real
Nov 26 11:07:24 np0005536539 dracut[1240]: Files:          50
Nov 26 11:07:24 np0005536539 dracut[1240]: Linked:         0 files
Nov 26 11:07:24 np0005536539 dracut[1240]: Compared:       0 xattrs
Nov 26 11:07:24 np0005536539 dracut[1240]: Compared:       0 files
Nov 26 11:07:24 np0005536539 dracut[1240]: Saved:          0 B
Nov 26 11:07:24 np0005536539 dracut[1240]: Duration:       0.000354 seconds
Nov 26 11:07:24 np0005536539 dracut[1240]: *** Hardlinking files done ***
Nov 26 11:07:24 np0005536539 dracut[1240]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 26 11:07:25 np0005536539 kdumpctl[970]: kdump: kexec: loaded kdump kernel
Nov 26 11:07:25 np0005536539 kdumpctl[970]: kdump: Starting kdump: [OK]
Nov 26 11:07:25 np0005536539 systemd[1]: Finished Crash recovery kernel arming.
Nov 26 11:07:25 np0005536539 systemd[1]: Startup finished in 1.318s (kernel) + 2.012s (initrd) + 15.066s (userspace) = 18.397s.
Nov 26 11:07:25 np0005536539 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 26 11:07:41 np0005536539 sshd-session[4365]: Accepted publickey for zuul from 192.168.26.12 port 45350 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Nov 26 11:07:41 np0005536539 systemd[1]: Created slice User Slice of UID 1000.
Nov 26 11:07:41 np0005536539 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 26 11:07:41 np0005536539 systemd-logind[744]: New session 1 of user zuul.
Nov 26 11:07:41 np0005536539 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 26 11:07:41 np0005536539 systemd[1]: Starting User Manager for UID 1000...
Nov 26 11:07:41 np0005536539 systemd[4369]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:07:41 np0005536539 systemd[4369]: Queued start job for default target Main User Target.
Nov 26 11:07:41 np0005536539 systemd[4369]: Created slice User Application Slice.
Nov 26 11:07:41 np0005536539 systemd[4369]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 26 11:07:41 np0005536539 systemd[4369]: Started Daily Cleanup of User's Temporary Directories.
Nov 26 11:07:41 np0005536539 systemd[4369]: Reached target Paths.
Nov 26 11:07:41 np0005536539 systemd[4369]: Reached target Timers.
Nov 26 11:07:41 np0005536539 systemd[4369]: Starting D-Bus User Message Bus Socket...
Nov 26 11:07:41 np0005536539 systemd[4369]: Starting Create User's Volatile Files and Directories...
Nov 26 11:07:41 np0005536539 systemd[4369]: Finished Create User's Volatile Files and Directories.
Nov 26 11:07:41 np0005536539 systemd[4369]: Listening on D-Bus User Message Bus Socket.
Nov 26 11:07:41 np0005536539 systemd[4369]: Reached target Sockets.
Nov 26 11:07:41 np0005536539 systemd[4369]: Reached target Basic System.
Nov 26 11:07:41 np0005536539 systemd[4369]: Reached target Main User Target.
Nov 26 11:07:41 np0005536539 systemd[4369]: Startup finished in 83ms.
Nov 26 11:07:41 np0005536539 systemd[1]: Started User Manager for UID 1000.
Nov 26 11:07:41 np0005536539 systemd[1]: Started Session 1 of User zuul.
Nov 26 11:07:41 np0005536539 sshd-session[4365]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:07:41 np0005536539 python3[4451]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:07:41 np0005536539 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 26 11:07:43 np0005536539 python3[4481]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:07:48 np0005536539 python3[4535]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:07:49 np0005536539 python3[4575]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 26 11:07:50 np0005536539 python3[4601]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDR+NL89jgTJDG3txw0DeV3Dh4HX8pIj0EYzd7GG1bcIqQn+E0uPw8sHYIgxtWJKq4CI5bpuimTvm1D6A2V68QHD2Vo48CWmzCu22jLWwX8aaWkkLeLoqlDLLxxKq5YYRnmVhCPy/oaEzck06GEN8Lfn93BsXjwlucRyKMYpLvsN1FkCOiB+DuTSAmSLdZR+oeFwqZ+OrusiUviZgk/Q8neqNv7Qh4Rm3xYhmi0X+ppejUxj+WXaueX01nGm29wOXwCcjHOMcY3tI3zjMBvwSlERzWWJrHUj5/kTdGOttqrZ2H7idQsruSjwYw4P6kKyrASD1OK/uvjteek74XixasvI8CnEphPClC49QE6yFYwga2uwGlPX724Us6owQE3JGblgV8I1Mquo/3hMb1HLEli+INaaymwQ9dYJ92SviW01uB1RN8ZgyeLbsE9pqrK3iPiFMozMx8EqbcEVerB7wajVMOeISn5uSA5nXvgtQU40hiNjn850yAQ7PGUTKRZ/1k= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:07:51 np0005536539 python3[4625]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:07:51 np0005536539 python3[4724]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:07:51 np0005536539 python3[4795]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764155271.262887-207-21703049208387/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=27d306f1836a4e1abb7edc2ddfc6c48e_id_rsa follow=False checksum=e99beb91eb4629a1bf0812cb927ebc00c9b9efa2 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:07:52 np0005536539 python3[4918]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:07:52 np0005536539 python3[4989]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764155271.918845-240-65502816393583/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=27d306f1836a4e1abb7edc2ddfc6c48e_id_rsa.pub follow=False checksum=42779ba9ea6d6a6a703d7f0146f3382efa84fa34 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:07:53 np0005536539 python3[5037]: ansible-ping Invoked with data=pong
Nov 26 11:07:54 np0005536539 python3[5061]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:07:55 np0005536539 python3[5115]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 26 11:07:56 np0005536539 python3[5147]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:07:56 np0005536539 python3[5171]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:07:56 np0005536539 python3[5195]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:07:57 np0005536539 python3[5219]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:07:57 np0005536539 python3[5243]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:07:57 np0005536539 python3[5267]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:07:58 np0005536539 sudo[5291]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vekafpkyfvxfezqjvvdklmsrgapwhxcr ; /usr/bin/python3'
Nov 26 11:07:58 np0005536539 sudo[5291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:07:59 np0005536539 python3[5293]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:07:59 np0005536539 sudo[5291]: pam_unix(sudo:session): session closed for user root
Nov 26 11:07:59 np0005536539 sudo[5369]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utgbwmltvhfvfsvclxeikhltfrnnztgv ; /usr/bin/python3'
Nov 26 11:07:59 np0005536539 sudo[5369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:07:59 np0005536539 python3[5371]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:07:59 np0005536539 sudo[5369]: pam_unix(sudo:session): session closed for user root
Nov 26 11:07:59 np0005536539 sudo[5442]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvpqrazzuhtslburmgzrhevyeouksstd ; /usr/bin/python3'
Nov 26 11:07:59 np0005536539 sudo[5442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:07:59 np0005536539 python3[5444]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764155279.1860666-21-88326444146151/source follow=False _original_basename=mirror_info.sh.j2 checksum=3f92644b791816833989d215b9a84c589a7b8ebd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:07:59 np0005536539 sudo[5442]: pam_unix(sudo:session): session closed for user root
Nov 26 11:08:00 np0005536539 python3[5492]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:00 np0005536539 python3[5516]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:00 np0005536539 python3[5540]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:00 np0005536539 python3[5564]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:01 np0005536539 python3[5588]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:01 np0005536539 python3[5612]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:01 np0005536539 python3[5636]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:01 np0005536539 python3[5660]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:01 np0005536539 python3[5684]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:02 np0005536539 python3[5708]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:02 np0005536539 python3[5732]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:02 np0005536539 python3[5756]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:02 np0005536539 python3[5780]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:03 np0005536539 python3[5804]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:03 np0005536539 python3[5828]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:03 np0005536539 python3[5852]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:03 np0005536539 python3[5876]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:03 np0005536539 python3[5900]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:04 np0005536539 python3[5924]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:04 np0005536539 python3[5948]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:04 np0005536539 python3[5972]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:04 np0005536539 python3[5996]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:04 np0005536539 python3[6020]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:05 np0005536539 python3[6044]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:05 np0005536539 python3[6068]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:05 np0005536539 python3[6092]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:08:08 np0005536539 sudo[6116]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eryeplvsijmxxnamcrdztialomsxftse ; /usr/bin/python3'
Nov 26 11:08:08 np0005536539 sudo[6116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:08:08 np0005536539 python3[6118]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 26 11:08:08 np0005536539 systemd[1]: Starting Time & Date Service...
Nov 26 11:08:08 np0005536539 systemd[1]: Started Time & Date Service.
Nov 26 11:08:08 np0005536539 systemd-timedated[6120]: Changed time zone to 'UTC' (UTC).
Nov 26 11:08:08 np0005536539 sudo[6116]: pam_unix(sudo:session): session closed for user root
Nov 26 11:08:08 np0005536539 sudo[6147]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srzujxlnmucldbjtayhwkbecbjxfzscr ; /usr/bin/python3'
Nov 26 11:08:08 np0005536539 sudo[6147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:08:08 np0005536539 python3[6149]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:08:08 np0005536539 sudo[6147]: pam_unix(sudo:session): session closed for user root
Nov 26 11:08:08 np0005536539 python3[6225]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:08:08 np0005536539 python3[6296]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764155288.5965571-153-246794550300402/source _original_basename=tmpntzttkwt follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:08:09 np0005536539 python3[6396]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:08:09 np0005536539 python3[6467]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764155289.2060723-183-154125955006319/source _original_basename=tmpamyflgxq follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:08:10 np0005536539 sudo[6567]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyctuvdwmutpsxilaywxngjtzyphtpum ; /usr/bin/python3'
Nov 26 11:08:10 np0005536539 sudo[6567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:08:10 np0005536539 python3[6569]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:08:10 np0005536539 sudo[6567]: pam_unix(sudo:session): session closed for user root
Nov 26 11:08:10 np0005536539 sudo[6640]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxszuutkooddpdfqrtkadnkugwmpwehr ; /usr/bin/python3'
Nov 26 11:08:10 np0005536539 sudo[6640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:08:10 np0005536539 python3[6642]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764155290.0141861-231-63748672698591/source _original_basename=tmpbperctt2 follow=False checksum=f1c43f5a1ac0eb6f11d8ddc1f60a23a6df0f727f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:08:10 np0005536539 sudo[6640]: pam_unix(sudo:session): session closed for user root
Nov 26 11:08:10 np0005536539 python3[6690]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:08:11 np0005536539 python3[6716]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:08:11 np0005536539 sudo[6794]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnnmmaexnpsvfiliklyzwwkyqskhitzy ; /usr/bin/python3'
Nov 26 11:08:11 np0005536539 sudo[6794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:08:11 np0005536539 python3[6796]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:08:11 np0005536539 sudo[6794]: pam_unix(sudo:session): session closed for user root
Nov 26 11:08:11 np0005536539 sudo[6867]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehejgsayhyaeavjtdveulnvcglfidmlf ; /usr/bin/python3'
Nov 26 11:08:11 np0005536539 sudo[6867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:08:11 np0005536539 python3[6869]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764155291.2524035-273-130306510106686/source _original_basename=tmpaw8zq6mq follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:08:11 np0005536539 sudo[6867]: pam_unix(sudo:session): session closed for user root
Nov 26 11:08:12 np0005536539 sudo[6918]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhtaqbdkgflatnwwulcmyjivrfliflvq ; /usr/bin/python3'
Nov 26 11:08:12 np0005536539 sudo[6918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:08:12 np0005536539 python3[6920]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e08-49e2-3cc3-abe6-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:08:12 np0005536539 sudo[6918]: pam_unix(sudo:session): session closed for user root
Nov 26 11:08:12 np0005536539 python3[6948]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                             _uses_shell=True zuul_log_id=fa163e08-49e2-3cc3-abe6-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 26 11:08:13 np0005536539 python3[6976]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:08:30 np0005536539 sudo[7000]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfpekurnppruefancfjcgyrqvvmdwwwz ; /usr/bin/python3'
Nov 26 11:08:30 np0005536539 sudo[7000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:08:30 np0005536539 python3[7002]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:08:30 np0005536539 sudo[7000]: pam_unix(sudo:session): session closed for user root
Nov 26 11:08:38 np0005536539 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 26 11:08:54 np0005536539 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Nov 26 11:08:54 np0005536539 kernel: pci 0000:07:00.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 26 11:08:54 np0005536539 kernel: pci 0000:07:00.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 26 11:08:54 np0005536539 kernel: pci 0000:07:00.0: ROM [mem 0x00000000-0x0003ffff pref]
Nov 26 11:08:54 np0005536539 kernel: pci 0000:07:00.0: ROM [mem 0xfe000000-0xfe03ffff pref]: assigned
Nov 26 11:08:54 np0005536539 kernel: pci 0000:07:00.0: BAR 4 [mem 0xfb600000-0xfb603fff 64bit pref]: assigned
Nov 26 11:08:54 np0005536539 kernel: pci 0000:07:00.0: BAR 1 [mem 0xfe040000-0xfe040fff]: assigned
Nov 26 11:08:54 np0005536539 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Nov 26 11:08:54 np0005536539 NetworkManager[810]: <info>  [1764155334.1298] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 26 11:08:54 np0005536539 systemd-udevd[7005]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 11:08:54 np0005536539 NetworkManager[810]: <info>  [1764155334.1478] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 11:08:54 np0005536539 NetworkManager[810]: <info>  [1764155334.1498] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 26 11:08:54 np0005536539 NetworkManager[810]: <info>  [1764155334.1500] device (eth1): carrier: link connected
Nov 26 11:08:54 np0005536539 NetworkManager[810]: <info>  [1764155334.1501] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 26 11:08:54 np0005536539 NetworkManager[810]: <info>  [1764155334.1506] policy: auto-activating connection 'Wired connection 1' (3c01c958-ae07-3583-8f8f-8a96e3659e99)
Nov 26 11:08:54 np0005536539 NetworkManager[810]: <info>  [1764155334.1508] device (eth1): Activation: starting connection 'Wired connection 1' (3c01c958-ae07-3583-8f8f-8a96e3659e99)
Nov 26 11:08:54 np0005536539 NetworkManager[810]: <info>  [1764155334.1509] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 11:08:54 np0005536539 NetworkManager[810]: <info>  [1764155334.1510] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 11:08:54 np0005536539 NetworkManager[810]: <info>  [1764155334.1513] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 11:08:54 np0005536539 NetworkManager[810]: <info>  [1764155334.1517] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 26 11:08:54 np0005536539 python3[7032]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e08-49e2-4c1f-05c6-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:09:04 np0005536539 sudo[7110]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nydpuhpkfdnswkyddgfjqpdyfxrsufhj ; OS_CLOUD=ibm-bm4-nodepool /usr/bin/python3'
Nov 26 11:09:04 np0005536539 sudo[7110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:09:04 np0005536539 python3[7112]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:09:04 np0005536539 sudo[7110]: pam_unix(sudo:session): session closed for user root
Nov 26 11:09:04 np0005536539 sudo[7183]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcgvhrrfntyzuvkovgvvpavydsgnascy ; OS_CLOUD=ibm-bm4-nodepool /usr/bin/python3'
Nov 26 11:09:04 np0005536539 sudo[7183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:09:04 np0005536539 python3[7185]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764155344.1749973-111-267691098066846/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=5d935f93396b79671a095bc7c79152ee5090bd68 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:09:04 np0005536539 sudo[7183]: pam_unix(sudo:session): session closed for user root
Nov 26 11:09:05 np0005536539 sudo[7233]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kudtwtahzcvqchsdtfkkiegucoskjmmh ; OS_CLOUD=ibm-bm4-nodepool /usr/bin/python3'
Nov 26 11:09:05 np0005536539 sudo[7233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:09:05 np0005536539 python3[7235]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 11:09:05 np0005536539 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 26 11:09:05 np0005536539 systemd[1]: Stopped Network Manager Wait Online.
Nov 26 11:09:05 np0005536539 systemd[1]: Stopping Network Manager Wait Online...
Nov 26 11:09:05 np0005536539 NetworkManager[810]: <info>  [1764155345.2961] caught SIGTERM, shutting down normally.
Nov 26 11:09:05 np0005536539 systemd[1]: Stopping Network Manager...
Nov 26 11:09:05 np0005536539 NetworkManager[810]: <info>  [1764155345.2967] dhcp4 (eth0): canceled DHCP transaction
Nov 26 11:09:05 np0005536539 NetworkManager[810]: <info>  [1764155345.2967] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 11:09:05 np0005536539 NetworkManager[810]: <info>  [1764155345.2967] dhcp4 (eth0): state changed no lease
Nov 26 11:09:05 np0005536539 NetworkManager[810]: <info>  [1764155345.2968] dhcp6 (eth0): canceled DHCP transaction
Nov 26 11:09:05 np0005536539 NetworkManager[810]: <info>  [1764155345.2968] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 11:09:05 np0005536539 NetworkManager[810]: <info>  [1764155345.2968] dhcp6 (eth0): state changed no lease
Nov 26 11:09:05 np0005536539 NetworkManager[810]: <info>  [1764155345.2970] manager: NetworkManager state is now CONNECTING
Nov 26 11:09:05 np0005536539 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 26 11:09:05 np0005536539 NetworkManager[810]: <info>  [1764155345.3157] dhcp4 (eth1): canceled DHCP transaction
Nov 26 11:09:05 np0005536539 NetworkManager[810]: <info>  [1764155345.3157] dhcp4 (eth1): state changed no lease
Nov 26 11:09:05 np0005536539 NetworkManager[810]: <info>  [1764155345.3175] exiting (success)
Nov 26 11:09:05 np0005536539 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 26 11:09:05 np0005536539 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 26 11:09:05 np0005536539 systemd[1]: Stopped Network Manager.
Nov 26 11:09:05 np0005536539 systemd[1]: Starting Network Manager...
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.3632] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:85c12273-0edc-4b34-861a-c0940ef400f5)
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.3634] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.3677] manager[0x55d44ed62070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 26 11:09:05 np0005536539 systemd[1]: Starting Hostname Service...
Nov 26 11:09:05 np0005536539 systemd[1]: Started Hostname Service.
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4322] hostname: hostname: using hostnamed
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4323] hostname: static hostname changed from (none) to "np0005536539"
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4326] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4329] manager[0x55d44ed62070]: rfkill: Wi-Fi hardware radio set enabled
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4329] manager[0x55d44ed62070]: rfkill: WWAN hardware radio set enabled
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4350] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4351] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4351] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4351] manager: Networking is enabled by state file
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4361] settings: Loaded settings plugin: keyfile (internal)
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4364] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4384] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4395] dhcp: init: Using DHCP client 'internal'
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4397] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4401] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4405] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4410] device (lo): Activation: starting connection 'lo' (868fb90f-4437-4595-9529-b8bb5b9dbd08)
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4416] device (eth0): carrier: link connected
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4419] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4422] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4422] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4426] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4435] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4441] device (eth1): carrier: link connected
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4444] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4447] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (3c01c958-ae07-3583-8f8f-8a96e3659e99) (indicated)
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4452] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4456] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4460] device (eth1): Activation: starting connection 'Wired connection 1' (3c01c958-ae07-3583-8f8f-8a96e3659e99)
Nov 26 11:09:05 np0005536539 systemd[1]: Started Network Manager.
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4464] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4489] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4491] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4494] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4496] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4499] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4502] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4505] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4507] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4516] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 26 11:09:05 np0005536539 systemd[1]: Starting Network Manager Wait Online...
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4521] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4523] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4526] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4531] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4534] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4545] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4551] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4557] device (lo): Activation: successful, device activated.
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4566] dhcp4 (eth0): state changed new lease, address=192.168.26.91
Nov 26 11:09:05 np0005536539 NetworkManager[7251]: <info>  [1764155345.4572] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 26 11:09:05 np0005536539 sudo[7233]: pam_unix(sudo:session): session closed for user root
Nov 26 11:09:05 np0005536539 python3[7307]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e08-49e2-4c1f-05c6-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:09:06 np0005536539 NetworkManager[7251]: <info>  [1764155346.4913] dhcp6 (eth0): state changed new lease, address=2001:db8::cb
Nov 26 11:09:06 np0005536539 NetworkManager[7251]: <info>  [1764155346.4922] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 26 11:09:06 np0005536539 NetworkManager[7251]: <info>  [1764155346.4942] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 26 11:09:06 np0005536539 NetworkManager[7251]: <info>  [1764155346.4943] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 26 11:09:06 np0005536539 NetworkManager[7251]: <info>  [1764155346.4946] manager: NetworkManager state is now CONNECTED_SITE
Nov 26 11:09:06 np0005536539 NetworkManager[7251]: <info>  [1764155346.4948] device (eth0): Activation: successful, device activated.
Nov 26 11:09:06 np0005536539 NetworkManager[7251]: <info>  [1764155346.4951] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 26 11:09:16 np0005536539 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 26 11:09:35 np0005536539 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 26 11:09:45 np0005536539 sudo[7406]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouqzlzdajfvxkqzfyoezgkhegrkdmizx ; OS_CLOUD=ibm-bm4-nodepool /usr/bin/python3'
Nov 26 11:09:45 np0005536539 sudo[7406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:09:45 np0005536539 python3[7408]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:09:45 np0005536539 sudo[7406]: pam_unix(sudo:session): session closed for user root
Nov 26 11:09:45 np0005536539 sudo[7479]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekhogifaltapvtaewwctffbgpqajawtt ; OS_CLOUD=ibm-bm4-nodepool /usr/bin/python3'
Nov 26 11:09:45 np0005536539 sudo[7479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:09:45 np0005536539 python3[7481]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764155385.2289667-273-124900756047592/source _original_basename=tmpztixgws2 follow=False checksum=210fc4672c1ce0e893ca9764bc7ff7a555220d7b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:09:45 np0005536539 sudo[7479]: pam_unix(sudo:session): session closed for user root
Nov 26 11:09:50 np0005536539 NetworkManager[7251]: <info>  [1764155390.6632] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 26 11:09:50 np0005536539 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 26 11:09:50 np0005536539 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 26 11:09:50 np0005536539 NetworkManager[7251]: <info>  [1764155390.6841] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 26 11:09:50 np0005536539 NetworkManager[7251]: <info>  [1764155390.6842] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 26 11:09:50 np0005536539 NetworkManager[7251]: <info>  [1764155390.6847] device (eth1): Activation: successful, device activated.
Nov 26 11:09:50 np0005536539 NetworkManager[7251]: <info>  [1764155390.6850] manager: startup complete
Nov 26 11:09:50 np0005536539 NetworkManager[7251]: <info>  [1764155390.6851] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 26 11:09:50 np0005536539 NetworkManager[7251]: <warn>  [1764155390.6854] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 26 11:09:50 np0005536539 NetworkManager[7251]: <info>  [1764155390.6858] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 26 11:09:50 np0005536539 systemd[1]: Finished Network Manager Wait Online.
Nov 26 11:09:50 np0005536539 NetworkManager[7251]: <info>  [1764155390.6906] dhcp4 (eth1): canceled DHCP transaction
Nov 26 11:09:50 np0005536539 NetworkManager[7251]: <info>  [1764155390.6907] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 26 11:09:50 np0005536539 NetworkManager[7251]: <info>  [1764155390.6907] dhcp4 (eth1): state changed no lease
Nov 26 11:09:50 np0005536539 NetworkManager[7251]: <info>  [1764155390.6915] policy: auto-activating connection 'ci-private-network' (99a73924-2eda-5c64-a2d8-18ad8013f642)
Nov 26 11:09:50 np0005536539 NetworkManager[7251]: <info>  [1764155390.6919] device (eth1): Activation: starting connection 'ci-private-network' (99a73924-2eda-5c64-a2d8-18ad8013f642)
Nov 26 11:09:50 np0005536539 NetworkManager[7251]: <info>  [1764155390.6920] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 11:09:50 np0005536539 NetworkManager[7251]: <info>  [1764155390.6922] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 11:09:50 np0005536539 NetworkManager[7251]: <info>  [1764155390.6926] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 11:09:50 np0005536539 NetworkManager[7251]: <info>  [1764155390.6931] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 11:09:50 np0005536539 NetworkManager[7251]: <info>  [1764155390.6955] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 11:09:50 np0005536539 NetworkManager[7251]: <info>  [1764155390.6957] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 11:09:50 np0005536539 NetworkManager[7251]: <info>  [1764155390.6961] device (eth1): Activation: successful, device activated.
Nov 26 11:10:00 np0005536539 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 26 11:10:33 np0005536539 chronyd[746]: Selected source 104.131.155.175 (2.centos.pool.ntp.org)
Nov 26 11:10:35 np0005536539 systemd[4369]: Starting Mark boot as successful...
Nov 26 11:10:35 np0005536539 systemd[4369]: Finished Mark boot as successful.
Nov 26 11:10:45 np0005536539 sshd-session[4378]: Received disconnect from 192.168.26.12 port 45350:11: disconnected by user
Nov 26 11:10:45 np0005536539 sshd-session[4378]: Disconnected from user zuul 192.168.26.12 port 45350
Nov 26 11:10:45 np0005536539 sshd-session[4365]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:10:45 np0005536539 systemd-logind[744]: Session 1 logged out. Waiting for processes to exit.
Nov 26 11:13:35 np0005536539 systemd[4369]: Created slice User Background Tasks Slice.
Nov 26 11:13:35 np0005536539 systemd[4369]: Starting Cleanup of User's Temporary Files and Directories...
Nov 26 11:13:35 np0005536539 systemd[4369]: Finished Cleanup of User's Temporary Files and Directories.
Nov 26 11:15:15 np0005536539 sshd-session[7535]: Accepted publickey for zuul from 192.168.26.12 port 38566 ssh2: RSA SHA256:zabNQ9AdBNRW68Pm3aADxeQV2ZE/dUlv4LQX84ptJZE
Nov 26 11:15:15 np0005536539 systemd-logind[744]: New session 3 of user zuul.
Nov 26 11:15:15 np0005536539 systemd[1]: Started Session 3 of User zuul.
Nov 26 11:15:15 np0005536539 sshd-session[7535]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:15:15 np0005536539 sudo[7562]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wafinurhwogdggpqnxyatifpivlwkinp ; /usr/bin/python3'
Nov 26 11:15:15 np0005536539 sudo[7562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:15:15 np0005536539 python3[7564]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                             _uses_shell=True zuul_log_id=fa163e08-49e2-2e49-21b5-000000001cce-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:15:15 np0005536539 sudo[7562]: pam_unix(sudo:session): session closed for user root
Nov 26 11:15:15 np0005536539 sudo[7591]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqcaswxlxaznbcyehpyuwsjmmjtkbhqx ; /usr/bin/python3'
Nov 26 11:15:15 np0005536539 sudo[7591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:15:15 np0005536539 python3[7593]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:15:15 np0005536539 sudo[7591]: pam_unix(sudo:session): session closed for user root
Nov 26 11:15:15 np0005536539 sudo[7617]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnotksgukqnsfpqhggozldhywuqyicwo ; /usr/bin/python3'
Nov 26 11:15:15 np0005536539 sudo[7617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:15:15 np0005536539 python3[7619]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:15:15 np0005536539 sudo[7617]: pam_unix(sudo:session): session closed for user root
Nov 26 11:15:15 np0005536539 sudo[7643]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hovmgcwfdkeqnpiditaxuhzckhwfkayh ; /usr/bin/python3'
Nov 26 11:15:15 np0005536539 sudo[7643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:15:16 np0005536539 python3[7645]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:15:16 np0005536539 sudo[7643]: pam_unix(sudo:session): session closed for user root
Nov 26 11:15:16 np0005536539 sudo[7669]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyzbbbkjazgmlwxrytyrcrlpbbhadynd ; /usr/bin/python3'
Nov 26 11:15:16 np0005536539 sudo[7669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:15:16 np0005536539 python3[7671]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:15:16 np0005536539 sudo[7669]: pam_unix(sudo:session): session closed for user root
Nov 26 11:15:16 np0005536539 sudo[7695]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enincdrglhjudcyucloxpzaxddkpukzw ; /usr/bin/python3'
Nov 26 11:15:16 np0005536539 sudo[7695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:15:16 np0005536539 python3[7697]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:15:16 np0005536539 sudo[7695]: pam_unix(sudo:session): session closed for user root
Nov 26 11:15:16 np0005536539 sudo[7773]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuphgorzonckzqhexyvxwbzqddyqibtl ; /usr/bin/python3'
Nov 26 11:15:16 np0005536539 sudo[7773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:15:17 np0005536539 python3[7775]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:15:17 np0005536539 sudo[7773]: pam_unix(sudo:session): session closed for user root
Nov 26 11:15:17 np0005536539 sudo[7846]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uulysmhnmbkkeqrhuigwktfqsrwtoaxt ; /usr/bin/python3'
Nov 26 11:15:17 np0005536539 sudo[7846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:15:17 np0005536539 python3[7848]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764155716.9111068-474-82626761217310/source _original_basename=tmpi1o7xqcq follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:15:17 np0005536539 sudo[7846]: pam_unix(sudo:session): session closed for user root
Nov 26 11:15:17 np0005536539 sudo[7896]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpnsltleukgfhtwxwnabjwazicpytnkg ; /usr/bin/python3'
Nov 26 11:15:17 np0005536539 sudo[7896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:15:18 np0005536539 python3[7898]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 11:15:18 np0005536539 systemd[1]: Reloading.
Nov 26 11:15:18 np0005536539 systemd-rc-local-generator[7916]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:15:18 np0005536539 sudo[7896]: pam_unix(sudo:session): session closed for user root
Nov 26 11:15:19 np0005536539 sudo[7952]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnskjfmnpbputxqekxjetdfklarrftmm ; /usr/bin/python3'
Nov 26 11:15:19 np0005536539 sudo[7952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:15:19 np0005536539 python3[7954]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 26 11:15:19 np0005536539 sudo[7952]: pam_unix(sudo:session): session closed for user root
Nov 26 11:15:19 np0005536539 sudo[7978]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulkbabzestipwekxboeqrauyrmrylhce ; /usr/bin/python3'
Nov 26 11:15:19 np0005536539 sudo[7978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:15:19 np0005536539 python3[7980]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                             _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:15:19 np0005536539 sudo[7978]: pam_unix(sudo:session): session closed for user root
Nov 26 11:15:19 np0005536539 sudo[8006]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-levsonhsnrxvfrsimcvvdhtwyyadyune ; /usr/bin/python3'
Nov 26 11:15:19 np0005536539 sudo[8006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:15:19 np0005536539 python3[8008]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                             _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:15:19 np0005536539 sudo[8006]: pam_unix(sudo:session): session closed for user root
Nov 26 11:15:19 np0005536539 sudo[8034]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyqzgbzdisggwocfwyaoulfcdunbbagy ; /usr/bin/python3'
Nov 26 11:15:19 np0005536539 sudo[8034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:15:20 np0005536539 python3[8036]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                             _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:15:20 np0005536539 sudo[8034]: pam_unix(sudo:session): session closed for user root
Nov 26 11:15:20 np0005536539 sudo[8062]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwmnhvrdipqaaurhjqdxieqtjwtrsybe ; /usr/bin/python3'
Nov 26 11:15:20 np0005536539 sudo[8062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:15:20 np0005536539 python3[8064]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                             _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:15:20 np0005536539 sudo[8062]: pam_unix(sudo:session): session closed for user root
Nov 26 11:15:20 np0005536539 python3[8091]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                             _uses_shell=True zuul_log_id=fa163e08-49e2-2e49-21b5-000000001cd5-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:15:21 np0005536539 python3[8121]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 11:15:23 np0005536539 sshd-session[7538]: Connection closed by 192.168.26.12 port 38566
Nov 26 11:15:23 np0005536539 sshd-session[7535]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:15:23 np0005536539 systemd[1]: session-3.scope: Deactivated successfully.
Nov 26 11:15:23 np0005536539 systemd[1]: session-3.scope: Consumed 2.898s CPU time.
Nov 26 11:15:23 np0005536539 systemd-logind[744]: Session 3 logged out. Waiting for processes to exit.
Nov 26 11:15:23 np0005536539 systemd-logind[744]: Removed session 3.
Nov 26 11:15:24 np0005536539 sshd-session[8127]: Accepted publickey for zuul from 192.168.26.12 port 37100 ssh2: RSA SHA256:zabNQ9AdBNRW68Pm3aADxeQV2ZE/dUlv4LQX84ptJZE
Nov 26 11:15:24 np0005536539 systemd-logind[744]: New session 4 of user zuul.
Nov 26 11:15:24 np0005536539 systemd[1]: Started Session 4 of User zuul.
Nov 26 11:15:24 np0005536539 sshd-session[8127]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:15:24 np0005536539 sudo[8154]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdfmxmowetqppznfbrzfhwpjblfwzlbg ; /usr/bin/python3'
Nov 26 11:15:24 np0005536539 sudo[8154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:15:25 np0005536539 python3[8156]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 26 11:15:31 np0005536539 irqbalance[739]: Cannot change IRQ 43 affinity: Operation not permitted
Nov 26 11:15:31 np0005536539 irqbalance[739]: IRQ 43 affinity is now unmanaged
Nov 26 11:15:42 np0005536539 kernel: SELinux:  Converting 386 SID table entries...
Nov 26 11:15:42 np0005536539 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 11:15:42 np0005536539 kernel: SELinux:  policy capability open_perms=1
Nov 26 11:15:42 np0005536539 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 11:15:42 np0005536539 kernel: SELinux:  policy capability always_check_network=0
Nov 26 11:15:42 np0005536539 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 11:15:42 np0005536539 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 11:15:42 np0005536539 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 11:15:49 np0005536539 kernel: SELinux:  Converting 386 SID table entries...
Nov 26 11:15:49 np0005536539 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 11:15:49 np0005536539 kernel: SELinux:  policy capability open_perms=1
Nov 26 11:15:49 np0005536539 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 11:15:49 np0005536539 kernel: SELinux:  policy capability always_check_network=0
Nov 26 11:15:49 np0005536539 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 11:15:49 np0005536539 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 11:15:49 np0005536539 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 11:15:56 np0005536539 kernel: SELinux:  Converting 386 SID table entries...
Nov 26 11:15:56 np0005536539 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 11:15:56 np0005536539 kernel: SELinux:  policy capability open_perms=1
Nov 26 11:15:56 np0005536539 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 11:15:56 np0005536539 kernel: SELinux:  policy capability always_check_network=0
Nov 26 11:15:56 np0005536539 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 11:15:56 np0005536539 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 11:15:56 np0005536539 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 11:15:56 np0005536539 setsebool[8222]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 26 11:15:56 np0005536539 setsebool[8222]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 26 11:16:05 np0005536539 kernel: SELinux:  Converting 389 SID table entries...
Nov 26 11:16:05 np0005536539 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 11:16:05 np0005536539 kernel: SELinux:  policy capability open_perms=1
Nov 26 11:16:05 np0005536539 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 11:16:05 np0005536539 kernel: SELinux:  policy capability always_check_network=0
Nov 26 11:16:05 np0005536539 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 11:16:05 np0005536539 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 11:16:05 np0005536539 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 11:16:17 np0005536539 dbus-broker-launch[733]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 26 11:16:17 np0005536539 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 11:16:17 np0005536539 systemd[1]: Starting man-db-cache-update.service...
Nov 26 11:16:17 np0005536539 systemd[1]: Reloading.
Nov 26 11:16:17 np0005536539 systemd-rc-local-generator[8970]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:16:17 np0005536539 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 11:16:18 np0005536539 sudo[8154]: pam_unix(sudo:session): session closed for user root
Nov 26 11:16:30 np0005536539 python3[21633]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                              _uses_shell=True zuul_log_id=fa163e08-49e2-b2d9-bdb7-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:16:31 np0005536539 kernel: evm: overlay not supported
Nov 26 11:16:31 np0005536539 systemd[4369]: Starting D-Bus User Message Bus...
Nov 26 11:16:31 np0005536539 dbus-broker-launch[22281]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 26 11:16:31 np0005536539 dbus-broker-launch[22281]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 26 11:16:31 np0005536539 systemd[4369]: Started D-Bus User Message Bus.
Nov 26 11:16:31 np0005536539 dbus-broker-lau[22281]: Ready
Nov 26 11:16:31 np0005536539 systemd[4369]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 26 11:16:31 np0005536539 systemd[4369]: Created slice Slice /user.
Nov 26 11:16:31 np0005536539 systemd[4369]: podman-22223.scope: unit configures an IP firewall, but not running as root.
Nov 26 11:16:31 np0005536539 systemd[4369]: (This warning is only shown for the first unit using IP firewalling.)
Nov 26 11:16:31 np0005536539 systemd[4369]: Started podman-22223.scope.
Nov 26 11:16:31 np0005536539 systemd[4369]: Started podman-pause-25ee6bd8.scope.
Nov 26 11:16:32 np0005536539 sudo[22992]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulyryrakjtwesalvxninhsfjmytolfmt ; /usr/bin/python3'
Nov 26 11:16:32 np0005536539 sudo[22992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:16:32 np0005536539 python3[23008]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                             location = "38.102.83.113:5001"
                                             insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                             location = "38.102.83.113:5001"
                                             insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:16:32 np0005536539 python3[23008]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 26 11:16:32 np0005536539 sudo[22992]: pam_unix(sudo:session): session closed for user root
Nov 26 11:16:32 np0005536539 sshd-session[8130]: Connection closed by 192.168.26.12 port 37100
Nov 26 11:16:32 np0005536539 sshd-session[8127]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:16:32 np0005536539 systemd-logind[744]: Session 4 logged out. Waiting for processes to exit.
Nov 26 11:16:32 np0005536539 systemd[1]: session-4.scope: Deactivated successfully.
Nov 26 11:16:32 np0005536539 systemd[1]: session-4.scope: Consumed 43.914s CPU time.
Nov 26 11:16:32 np0005536539 systemd-logind[744]: Removed session 4.
Nov 26 11:16:42 np0005536539 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 11:16:42 np0005536539 systemd[1]: Finished man-db-cache-update.service.
Nov 26 11:16:42 np0005536539 systemd[1]: man-db-cache-update.service: Consumed 30.277s CPU time.
Nov 26 11:16:42 np0005536539 systemd[1]: run-r3780c465a849412b9a0289923c42adf8.service: Deactivated successfully.
Nov 26 11:16:48 np0005536539 sshd-session[29688]: Connection closed by 192.168.26.201 port 57170 [preauth]
Nov 26 11:16:48 np0005536539 sshd-session[29689]: Connection closed by 192.168.26.201 port 57172 [preauth]
Nov 26 11:16:48 np0005536539 sshd-session[29690]: Unable to negotiate with 192.168.26.201 port 57174: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 26 11:16:48 np0005536539 sshd-session[29691]: Unable to negotiate with 192.168.26.201 port 57184: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 26 11:16:48 np0005536539 sshd-session[29692]: Unable to negotiate with 192.168.26.201 port 57196: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 26 11:16:57 np0005536539 sshd-session[29698]: Accepted publickey for zuul from 192.168.26.12 port 36368 ssh2: RSA SHA256:zabNQ9AdBNRW68Pm3aADxeQV2ZE/dUlv4LQX84ptJZE
Nov 26 11:16:57 np0005536539 systemd-logind[744]: New session 5 of user zuul.
Nov 26 11:16:57 np0005536539 systemd[1]: Started Session 5 of User zuul.
Nov 26 11:16:57 np0005536539 sshd-session[29698]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:16:57 np0005536539 python3[29725]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCMsaSW8Xm75qwAzKUNAXARupBxcNmAHtlz/DGQex9i+yYbZDeYlshmUUkC/iVkRqeFHNXms7cypIVYVeAqrPZI= zuul@np0005536538
                                              manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:16:57 np0005536539 sudo[29749]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvbrjryukontitibkhxuszbizzhxlmdz ; /usr/bin/python3'
Nov 26 11:16:57 np0005536539 sudo[29749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:16:58 np0005536539 python3[29751]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCMsaSW8Xm75qwAzKUNAXARupBxcNmAHtlz/DGQex9i+yYbZDeYlshmUUkC/iVkRqeFHNXms7cypIVYVeAqrPZI= zuul@np0005536538
                                              manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:16:58 np0005536539 sudo[29749]: pam_unix(sudo:session): session closed for user root
Nov 26 11:16:58 np0005536539 sudo[29775]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idljhahmvescdidvgamcvtwxvvfctsma ; /usr/bin/python3'
Nov 26 11:16:58 np0005536539 sudo[29775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:16:58 np0005536539 python3[29777]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005536539 update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 26 11:16:58 np0005536539 useradd[29779]: new group: name=cloud-admin, GID=1002
Nov 26 11:16:58 np0005536539 useradd[29779]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Nov 26 11:16:58 np0005536539 sudo[29775]: pam_unix(sudo:session): session closed for user root
Nov 26 11:16:58 np0005536539 sudo[29809]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhygtsrxqtbxloadzlahftfdhlobgnyn ; /usr/bin/python3'
Nov 26 11:16:58 np0005536539 sudo[29809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:16:58 np0005536539 python3[29811]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCMsaSW8Xm75qwAzKUNAXARupBxcNmAHtlz/DGQex9i+yYbZDeYlshmUUkC/iVkRqeFHNXms7cypIVYVeAqrPZI= zuul@np0005536538
                                              manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 11:16:58 np0005536539 sudo[29809]: pam_unix(sudo:session): session closed for user root
Nov 26 11:16:59 np0005536539 sudo[29887]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qphqizjlqiaujbjjkcmrebebwchbxpac ; /usr/bin/python3'
Nov 26 11:16:59 np0005536539 sudo[29887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:16:59 np0005536539 python3[29889]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:16:59 np0005536539 sudo[29887]: pam_unix(sudo:session): session closed for user root
Nov 26 11:16:59 np0005536539 sudo[29960]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esdypbonrcmsfhmgivwxnaozvovdkyzf ; /usr/bin/python3'
Nov 26 11:16:59 np0005536539 sudo[29960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:16:59 np0005536539 python3[29962]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764155819.0497594-137-180475026842989/source _original_basename=tmpjpwqrjuo follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:16:59 np0005536539 sudo[29960]: pam_unix(sudo:session): session closed for user root
Nov 26 11:17:00 np0005536539 sudo[30010]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylcdfbppwzzhhpjxiytnqicjwpymwoqp ; /usr/bin/python3'
Nov 26 11:17:00 np0005536539 sudo[30010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:17:00 np0005536539 python3[30012]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 26 11:17:00 np0005536539 systemd[1]: Starting Hostname Service...
Nov 26 11:17:00 np0005536539 systemd[1]: Started Hostname Service.
Nov 26 11:17:00 np0005536539 systemd-hostnamed[30016]: Changed pretty hostname to 'compute-0'
Nov 26 11:17:00 compute-0 systemd-hostnamed[30016]: Hostname set to <compute-0> (static)
Nov 26 11:17:00 compute-0 NetworkManager[7251]: <info>  [1764155820.3362] hostname: static hostname changed from "np0005536539" to "compute-0"
Nov 26 11:17:00 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 26 11:17:00 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 26 11:17:00 compute-0 sudo[30010]: pam_unix(sudo:session): session closed for user root
Nov 26 11:17:00 compute-0 sshd-session[29701]: Connection closed by 192.168.26.12 port 36368
Nov 26 11:17:00 compute-0 sshd-session[29698]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:17:00 compute-0 systemd[1]: session-5.scope: Deactivated successfully.
Nov 26 11:17:00 compute-0 systemd[1]: session-5.scope: Consumed 1.685s CPU time.
Nov 26 11:17:00 compute-0 systemd-logind[744]: Session 5 logged out. Waiting for processes to exit.
Nov 26 11:17:00 compute-0 systemd-logind[744]: Removed session 5.
Nov 26 11:17:10 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 26 11:17:30 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 26 11:20:43 compute-0 sshd-session[30034]: Accepted publickey for zuul from 192.168.26.201 port 33422 ssh2: RSA SHA256:zabNQ9AdBNRW68Pm3aADxeQV2ZE/dUlv4LQX84ptJZE
Nov 26 11:20:43 compute-0 systemd-logind[744]: New session 6 of user zuul.
Nov 26 11:20:43 compute-0 systemd[1]: Started Session 6 of User zuul.
Nov 26 11:20:43 compute-0 sshd-session[30034]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:20:43 compute-0 python3[30110]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:20:44 compute-0 sudo[30220]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-serzorzzodfhshjejyblecncjadnlecb ; /usr/bin/python3'
Nov 26 11:20:44 compute-0 sudo[30220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:20:45 compute-0 python3[30222]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:20:45 compute-0 sudo[30220]: pam_unix(sudo:session): session closed for user root
Nov 26 11:20:45 compute-0 sudo[30293]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvjltyxroimdljonpceqytrdfhhqofil ; /usr/bin/python3'
Nov 26 11:20:45 compute-0 sudo[30293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:20:45 compute-0 python3[30295]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764156044.9036589-33997-161326043855106/source mode=0755 _original_basename=delorean.repo follow=False checksum=cdee622b4b81aba8f448eb3a0d6bf38022474867 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:20:45 compute-0 sudo[30293]: pam_unix(sudo:session): session closed for user root
Nov 26 11:20:45 compute-0 sudo[30319]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaauxyqyhusadbvqsvujatlybycelozd ; /usr/bin/python3'
Nov 26 11:20:45 compute-0 sudo[30319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:20:45 compute-0 python3[30321]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:20:45 compute-0 sudo[30319]: pam_unix(sudo:session): session closed for user root
Nov 26 11:20:45 compute-0 sudo[30392]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qchtzhtqcwatnttsmnbouwmrhgjuupah ; /usr/bin/python3'
Nov 26 11:20:45 compute-0 sudo[30392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:20:45 compute-0 python3[30394]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764156044.9036589-33997-161326043855106/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=717d1fa230cffa8c08764d71bd0b4a50d3a90cae backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:20:45 compute-0 sudo[30392]: pam_unix(sudo:session): session closed for user root
Nov 26 11:20:45 compute-0 sudo[30418]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpeyeqfwwsiokyktxmwyatbibqfdvgpw ; /usr/bin/python3'
Nov 26 11:20:45 compute-0 sudo[30418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:20:46 compute-0 python3[30420]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:20:46 compute-0 sudo[30418]: pam_unix(sudo:session): session closed for user root
Nov 26 11:20:46 compute-0 sudo[30491]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usxsbalypulndqbhrsgewqajouumuvvn ; /usr/bin/python3'
Nov 26 11:20:46 compute-0 sudo[30491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:20:46 compute-0 python3[30493]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764156044.9036589-33997-161326043855106/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=8163d09913b97597f86e38eb45c3003e91da783e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:20:46 compute-0 sudo[30491]: pam_unix(sudo:session): session closed for user root
Nov 26 11:20:46 compute-0 sudo[30517]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqafayfmneifmbrsvutsnrxrpaebzdni ; /usr/bin/python3'
Nov 26 11:20:46 compute-0 sudo[30517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:20:46 compute-0 python3[30519]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:20:46 compute-0 sudo[30517]: pam_unix(sudo:session): session closed for user root
Nov 26 11:20:46 compute-0 sudo[30590]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seqjjlbtoeueydyftrekemjvnvrwzghc ; /usr/bin/python3'
Nov 26 11:20:46 compute-0 sudo[30590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:20:46 compute-0 python3[30592]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764156044.9036589-33997-161326043855106/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=d108d0750ad5b288ccc41bc6534ea307cc51e987 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:20:46 compute-0 sudo[30590]: pam_unix(sudo:session): session closed for user root
Nov 26 11:20:46 compute-0 sudo[30616]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htnevchzrjfyqfyohkxecizzbhaaeqzp ; /usr/bin/python3'
Nov 26 11:20:46 compute-0 sudo[30616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:20:46 compute-0 python3[30618]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:20:46 compute-0 sudo[30616]: pam_unix(sudo:session): session closed for user root
Nov 26 11:20:47 compute-0 sudo[30689]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvznowcweguhyhgalzjvgmsansorjbtc ; /usr/bin/python3'
Nov 26 11:20:47 compute-0 sudo[30689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:20:47 compute-0 python3[30691]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764156044.9036589-33997-161326043855106/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=20c3917c672c059a872cf09a437f61890d2f89fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:20:47 compute-0 sudo[30689]: pam_unix(sudo:session): session closed for user root
Nov 26 11:20:47 compute-0 sudo[30715]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dofdufsgnmormxriebewmppdzcdzllfy ; /usr/bin/python3'
Nov 26 11:20:47 compute-0 sudo[30715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:20:47 compute-0 python3[30717]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:20:47 compute-0 sudo[30715]: pam_unix(sudo:session): session closed for user root
Nov 26 11:20:47 compute-0 sudo[30788]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqxjclprdlyyaasnczalszevtbdchrzl ; /usr/bin/python3'
Nov 26 11:20:47 compute-0 sudo[30788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:20:47 compute-0 python3[30790]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764156044.9036589-33997-161326043855106/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=4d14f168e8a0e6930d905faffbcdf4fedd6664d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:20:47 compute-0 sudo[30788]: pam_unix(sudo:session): session closed for user root
Nov 26 11:20:47 compute-0 sudo[30814]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjpmhctlziymqulurankmaprarznuinf ; /usr/bin/python3'
Nov 26 11:20:47 compute-0 sudo[30814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:20:47 compute-0 python3[30816]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:20:47 compute-0 sudo[30814]: pam_unix(sudo:session): session closed for user root
Nov 26 11:20:47 compute-0 sudo[30887]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glvkdfeksdtgwiwmqonrtwbtfnxdqzli ; /usr/bin/python3'
Nov 26 11:20:47 compute-0 sudo[30887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:20:47 compute-0 python3[30889]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764156044.9036589-33997-161326043855106/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:20:47 compute-0 sudo[30887]: pam_unix(sudo:session): session closed for user root
Nov 26 11:20:49 compute-0 sshd-session[30914]: Connection closed by 192.168.122.11 port 54232 [preauth]
Nov 26 11:20:49 compute-0 sshd-session[30915]: Connection closed by 192.168.122.11 port 54246 [preauth]
Nov 26 11:20:49 compute-0 sshd-session[30916]: Unable to negotiate with 192.168.122.11 port 54258: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 26 11:20:49 compute-0 sshd-session[30917]: Unable to negotiate with 192.168.122.11 port 54266: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 26 11:20:49 compute-0 sshd-session[30918]: Unable to negotiate with 192.168.122.11 port 54276: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 26 11:20:59 compute-0 python3[30947]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:22:35 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 26 11:22:35 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 26 11:22:35 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 26 11:22:35 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 26 11:25:58 compute-0 sshd-session[30037]: Received disconnect from 192.168.26.201 port 33422:11: disconnected by user
Nov 26 11:25:58 compute-0 sshd-session[30037]: Disconnected from user zuul 192.168.26.201 port 33422
Nov 26 11:25:58 compute-0 sshd-session[30034]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:25:58 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Nov 26 11:25:58 compute-0 systemd[1]: session-6.scope: Consumed 3.464s CPU time.
Nov 26 11:25:58 compute-0 systemd-logind[744]: Session 6 logged out. Waiting for processes to exit.
Nov 26 11:25:58 compute-0 systemd-logind[744]: Removed session 6.
Nov 26 11:30:38 compute-0 sshd-session[30953]: Accepted publickey for zuul from 192.168.122.30 port 38898 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:30:38 compute-0 systemd-logind[744]: New session 7 of user zuul.
Nov 26 11:30:38 compute-0 systemd[1]: Started Session 7 of User zuul.
Nov 26 11:30:38 compute-0 sshd-session[30953]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:30:38 compute-0 python3.9[31106]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:30:39 compute-0 sudo[31285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umudbhplyeoeqjffyppupgjasxkpdglg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156639.4301639-32-88427002292437/AnsiballZ_command.py'
Nov 26 11:30:39 compute-0 sudo[31285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:30:39 compute-0 python3.9[31287]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:30:48 compute-0 sudo[31285]: pam_unix(sudo:session): session closed for user root
Nov 26 11:30:48 compute-0 sshd-session[30956]: Connection closed by 192.168.122.30 port 38898
Nov 26 11:30:48 compute-0 sshd-session[30953]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:30:48 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Nov 26 11:30:48 compute-0 systemd[1]: session-7.scope: Consumed 6.459s CPU time.
Nov 26 11:30:48 compute-0 systemd-logind[744]: Session 7 logged out. Waiting for processes to exit.
Nov 26 11:30:48 compute-0 systemd-logind[744]: Removed session 7.
Nov 26 11:31:03 compute-0 sshd-session[31344]: Accepted publickey for zuul from 192.168.122.30 port 45032 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:31:03 compute-0 systemd-logind[744]: New session 8 of user zuul.
Nov 26 11:31:03 compute-0 systemd[1]: Started Session 8 of User zuul.
Nov 26 11:31:03 compute-0 sshd-session[31344]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:31:04 compute-0 python3.9[31497]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 26 11:31:05 compute-0 python3.9[31671]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:31:05 compute-0 sudo[31821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmapgiodexhakfsnirajzoaxcjuqzxcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156665.395712-45-191787296804307/AnsiballZ_command.py'
Nov 26 11:31:05 compute-0 sudo[31821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:31:05 compute-0 python3.9[31823]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:31:05 compute-0 sudo[31821]: pam_unix(sudo:session): session closed for user root
Nov 26 11:31:06 compute-0 sudo[31974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crsoryseorrhpgrltcvrplzlrmwuxofj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156666.0919862-57-215188818201529/AnsiballZ_stat.py'
Nov 26 11:31:06 compute-0 sudo[31974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:31:06 compute-0 python3.9[31976]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:31:06 compute-0 sudo[31974]: pam_unix(sudo:session): session closed for user root
Nov 26 11:31:06 compute-0 sudo[32126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oddxpfadxetjyvikgbwpuitnsotysjrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156666.6727355-65-271507903014251/AnsiballZ_file.py'
Nov 26 11:31:06 compute-0 sudo[32126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:31:07 compute-0 python3.9[32128]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:31:07 compute-0 sudo[32126]: pam_unix(sudo:session): session closed for user root
Nov 26 11:31:07 compute-0 sudo[32278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aisgfjteqqgsapyxtazgluiiyikaoaju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156667.2663474-73-89487436539454/AnsiballZ_stat.py'
Nov 26 11:31:07 compute-0 sudo[32278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:31:07 compute-0 python3.9[32280]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:31:07 compute-0 sudo[32278]: pam_unix(sudo:session): session closed for user root
Nov 26 11:31:07 compute-0 sudo[32401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnxpxzhsmbnherdhspqdhosvdixffnif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156667.2663474-73-89487436539454/AnsiballZ_copy.py'
Nov 26 11:31:07 compute-0 sudo[32401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:31:08 compute-0 python3.9[32403]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764156667.2663474-73-89487436539454/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:31:08 compute-0 sudo[32401]: pam_unix(sudo:session): session closed for user root
Nov 26 11:31:08 compute-0 sudo[32553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofxcahdsgtihmejthprdmlakkctjnaci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156668.2276218-88-61118448830385/AnsiballZ_setup.py'
Nov 26 11:31:08 compute-0 sudo[32553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:31:08 compute-0 python3.9[32555]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:31:08 compute-0 sudo[32553]: pam_unix(sudo:session): session closed for user root
Nov 26 11:31:09 compute-0 sudo[32709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdnzudyfdhuwklybsvledurqrippxvyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156668.9250278-96-9065887370557/AnsiballZ_file.py'
Nov 26 11:31:09 compute-0 sudo[32709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:31:09 compute-0 python3.9[32711]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:31:09 compute-0 sudo[32709]: pam_unix(sudo:session): session closed for user root
Nov 26 11:31:09 compute-0 sudo[32861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flqurubwixyjadkvfavubtxmksfdmxry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156669.434832-105-198683227572563/AnsiballZ_file.py'
Nov 26 11:31:09 compute-0 sudo[32861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:31:09 compute-0 python3.9[32863]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:31:09 compute-0 sudo[32861]: pam_unix(sudo:session): session closed for user root
Nov 26 11:31:10 compute-0 python3.9[33013]: ansible-ansible.builtin.service_facts Invoked
Nov 26 11:31:14 compute-0 python3.9[33266]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:31:15 compute-0 python3.9[33416]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:31:15 compute-0 python3.9[33570]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:31:16 compute-0 sudo[33726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssxytbljktqxyribdyjrblegmomhdcgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156676.1940477-153-125462695416339/AnsiballZ_setup.py'
Nov 26 11:31:16 compute-0 sudo[33726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:31:16 compute-0 python3.9[33728]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 11:31:16 compute-0 sudo[33726]: pam_unix(sudo:session): session closed for user root
Nov 26 11:31:17 compute-0 sudo[33810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogskdmrcrysqixwmvbeytncmxyqhncoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156676.1940477-153-125462695416339/AnsiballZ_dnf.py'
Nov 26 11:31:17 compute-0 sudo[33810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:31:17 compute-0 python3.9[33812]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 11:32:48 compute-0 systemd[1]: Reloading.
Nov 26 11:32:48 compute-0 systemd-rc-local-generator[34017]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:32:48 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 26 11:32:49 compute-0 systemd[1]: Reloading.
Nov 26 11:32:49 compute-0 systemd-rc-local-generator[34058]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:32:49 compute-0 systemd[1]: Starting dnf makecache...
Nov 26 11:32:49 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 26 11:32:49 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 26 11:32:49 compute-0 systemd[1]: Reloading.
Nov 26 11:32:49 compute-0 systemd-rc-local-generator[34094]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:32:49 compute-0 dnf[34070]: Failed determining last makecache time.
Nov 26 11:32:49 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 26 11:32:49 compute-0 dnf[34070]: delorean-openstack-barbican-42b4c41831408a8e323  21 kB/s | 3.0 kB     00:00
Nov 26 11:32:49 compute-0 dbus-broker-launch[724]: Noticed file-system modification, trigger reload.
Nov 26 11:32:49 compute-0 dbus-broker-launch[724]: Noticed file-system modification, trigger reload.
Nov 26 11:32:49 compute-0 dbus-broker-launch[724]: Noticed file-system modification, trigger reload.
Nov 26 11:32:49 compute-0 dnf[34070]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7  21 kB/s | 3.0 kB     00:00
Nov 26 11:32:49 compute-0 dnf[34070]: delorean-openstack-cinder-1c00d6490d88e436f26ef  21 kB/s | 3.0 kB     00:00
Nov 26 11:32:49 compute-0 dnf[34070]: delorean-python-stevedore-c4acc5639fd2329372142  21 kB/s | 3.0 kB     00:00
Nov 26 11:32:50 compute-0 dnf[34070]: delorean-python-observabilityclient-2f31846d73c  22 kB/s | 3.0 kB     00:00
Nov 26 11:32:50 compute-0 dnf[34070]: delorean-os-net-config-bbae2ed8a159b0435a473f38  22 kB/s | 3.0 kB     00:00
Nov 26 11:32:50 compute-0 dnf[34070]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6  22 kB/s | 3.0 kB     00:00
Nov 26 11:32:50 compute-0 dnf[34070]: delorean-python-designate-tests-tempest-347fdbc  21 kB/s | 3.0 kB     00:00
Nov 26 11:32:50 compute-0 dnf[34070]: delorean-openstack-glance-1fd12c29b339f30fe823e  20 kB/s | 3.0 kB     00:00
Nov 26 11:32:50 compute-0 dnf[34070]: delorean-openstack-keystone-e4b40af0ae3698fbbbb  22 kB/s | 3.0 kB     00:00
Nov 26 11:32:51 compute-0 dnf[34070]: delorean-openstack-manila-3c01b7181572c95dac462  22 kB/s | 3.0 kB     00:00
Nov 26 11:32:51 compute-0 dnf[34070]: delorean-python-whitebox-neutron-tests-tempest-  22 kB/s | 3.0 kB     00:00
Nov 26 11:32:51 compute-0 dnf[34070]: delorean-openstack-octavia-ba397f07a7331190208c  20 kB/s | 3.0 kB     00:00
Nov 26 11:32:51 compute-0 dnf[34070]: delorean-openstack-watcher-c014f81a8647287f6dcc  20 kB/s | 3.0 kB     00:00
Nov 26 11:32:51 compute-0 dnf[34070]: delorean-python-tcib-1124124ec06aadbac34f0d340b  21 kB/s | 3.0 kB     00:00
Nov 26 11:32:51 compute-0 dnf[34070]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158  21 kB/s | 3.0 kB     00:00
Nov 26 11:32:51 compute-0 dnf[34070]: delorean-openstack-swift-dc98a8463506ac520c469a  18 kB/s | 3.0 kB     00:00
Nov 26 11:32:52 compute-0 dnf[34070]: delorean-python-tempestconf-8515371b7cceebd4282  21 kB/s | 3.0 kB     00:00
Nov 26 11:32:52 compute-0 dnf[34070]: delorean-openstack-heat-ui-013accbfd179753bc3f0  21 kB/s | 3.0 kB     00:00
Nov 26 11:32:53 compute-0 dnf[34070]: CentOS Stream 9 - BaseOS                        5.4 kB/s | 7.3 kB     00:01
Nov 26 11:32:53 compute-0 dnf[34070]: CentOS Stream 9 - AppStream                      19 kB/s | 7.4 kB     00:00
Nov 26 11:32:54 compute-0 dnf[34070]: CentOS Stream 9 - CRB                            14 kB/s | 7.2 kB     00:00
Nov 26 11:32:55 compute-0 dnf[34070]: CentOS Stream 9 - Extras packages                18 kB/s | 8.3 kB     00:00
Nov 26 11:32:55 compute-0 dnf[34070]: dlrn-antelope-testing                            20 kB/s | 3.0 kB     00:00
Nov 26 11:32:55 compute-0 dnf[34070]: dlrn-antelope-build-deps                         21 kB/s | 3.0 kB     00:00
Nov 26 11:32:56 compute-0 dnf[34070]: centos9-rabbitmq                                2.0 kB/s | 3.0 kB     00:01
Nov 26 11:32:58 compute-0 dnf[34070]: centos9-storage                                 2.5 kB/s | 3.0 kB     00:01
Nov 26 11:32:59 compute-0 dnf[34070]: centos9-opstools                                2.0 kB/s | 3.0 kB     00:01
Nov 26 11:33:00 compute-0 dnf[34070]: NFV SIG OpenvSwitch                             7.1 kB/s | 3.0 kB     00:00
Nov 26 11:33:02 compute-0 dnf[34070]: repo-setup-centos-appstream                     1.6 kB/s | 4.4 kB     00:02
Nov 26 11:33:04 compute-0 dnf[34070]: repo-setup-centos-baseos                        2.7 kB/s | 3.9 kB     00:01
Nov 26 11:33:04 compute-0 dnf[34070]: repo-setup-centos-highavailability              9.0 kB/s | 3.9 kB     00:00
Nov 26 11:33:05 compute-0 dnf[34070]: repo-setup-centos-powertools                     10 kB/s | 4.3 kB     00:00
Nov 26 11:33:05 compute-0 dnf[34070]: Extra Packages for Enterprise Linux 9 - x86_64   79 kB/s |  33 kB     00:00
Nov 26 11:33:06 compute-0 dnf[34070]: Metadata cache created.
Nov 26 11:33:06 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 26 11:33:06 compute-0 systemd[1]: Finished dnf makecache.
Nov 26 11:33:06 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.357s CPU time.
Nov 26 11:33:34 compute-0 kernel: SELinux:  Converting 2718 SID table entries...
Nov 26 11:33:34 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 11:33:34 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 26 11:33:34 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 11:33:34 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 26 11:33:34 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 11:33:34 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 11:33:34 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 11:33:34 compute-0 dbus-broker-launch[733]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 26 11:33:34 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 11:33:34 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 26 11:33:34 compute-0 systemd[1]: Reloading.
Nov 26 11:33:34 compute-0 systemd-rc-local-generator[34444]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:33:34 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 11:33:34 compute-0 sudo[33810]: pam_unix(sudo:session): session closed for user root
Nov 26 11:33:35 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 11:33:35 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 26 11:33:35 compute-0 systemd[1]: run-r796e717cd4ef4da7ace4b98238a961ee.service: Deactivated successfully.
Nov 26 11:33:35 compute-0 sudo[35365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnknfjgorigqzqoplcoxgaqmcdwpxzoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156814.978187-165-270914698115717/AnsiballZ_command.py'
Nov 26 11:33:35 compute-0 sudo[35365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:33:35 compute-0 python3.9[35367]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:33:35 compute-0 sudo[35365]: pam_unix(sudo:session): session closed for user root
Nov 26 11:33:36 compute-0 sudo[35646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqcpumjjdtuiregsllhtbmgdsuorkdsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156816.0543797-173-274107208785375/AnsiballZ_selinux.py'
Nov 26 11:33:36 compute-0 sudo[35646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:33:36 compute-0 python3.9[35648]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 26 11:33:36 compute-0 sudo[35646]: pam_unix(sudo:session): session closed for user root
Nov 26 11:33:37 compute-0 sudo[35798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgfuuqcjnjcjxmdrxzdjdzexemrqgjua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156816.9957318-184-236147361182877/AnsiballZ_command.py'
Nov 26 11:33:37 compute-0 sudo[35798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:33:37 compute-0 python3.9[35800]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 26 11:33:37 compute-0 sudo[35798]: pam_unix(sudo:session): session closed for user root
Nov 26 11:33:38 compute-0 sudo[35951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulwvntdihajhcclljftfdaspymuvzfnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156818.020226-192-35807757363231/AnsiballZ_file.py'
Nov 26 11:33:38 compute-0 sudo[35951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:33:39 compute-0 python3.9[35953]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:33:39 compute-0 sudo[35951]: pam_unix(sudo:session): session closed for user root
Nov 26 11:33:39 compute-0 sudo[36103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duevhhwyprhswemryeczhynijvvwytdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156819.289367-200-198224702977932/AnsiballZ_mount.py'
Nov 26 11:33:39 compute-0 sudo[36103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:33:39 compute-0 python3.9[36105]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 26 11:33:39 compute-0 sudo[36103]: pam_unix(sudo:session): session closed for user root
Nov 26 11:33:40 compute-0 sudo[36255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhrjarmxcmiooedozlidfqwedtfgfddp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156820.3125787-228-52646695716829/AnsiballZ_file.py'
Nov 26 11:33:40 compute-0 sudo[36255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:33:40 compute-0 python3.9[36257]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:33:40 compute-0 sudo[36255]: pam_unix(sudo:session): session closed for user root
Nov 26 11:33:40 compute-0 sudo[36407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hswbgtcruavimmcvmmuizhbfwwkwpgqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156820.7670743-236-205273387298075/AnsiballZ_stat.py'
Nov 26 11:33:40 compute-0 sudo[36407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:33:41 compute-0 python3.9[36409]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:33:41 compute-0 sudo[36407]: pam_unix(sudo:session): session closed for user root
Nov 26 11:33:41 compute-0 sudo[36530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhnbzaepgfbtjqhvwbxzqqxgwxezabzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156820.7670743-236-205273387298075/AnsiballZ_copy.py'
Nov 26 11:33:41 compute-0 sudo[36530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:33:41 compute-0 python3.9[36532]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764156820.7670743-236-205273387298075/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=cb4a049067962bd1105691478a237a6c6e4bd931 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:33:41 compute-0 sudo[36530]: pam_unix(sudo:session): session closed for user root
Nov 26 11:33:42 compute-0 sudo[36682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziodqmscemvwkdmkarhhivmnjzontfga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156821.840637-260-168516145419639/AnsiballZ_stat.py'
Nov 26 11:33:42 compute-0 sudo[36682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:33:42 compute-0 python3.9[36684]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:33:42 compute-0 sudo[36682]: pam_unix(sudo:session): session closed for user root
Nov 26 11:33:42 compute-0 sudo[36834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqguadnvinooxtuzrjvelxuojudtbtya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156822.3169131-268-116496639846846/AnsiballZ_command.py'
Nov 26 11:33:42 compute-0 sudo[36834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:33:42 compute-0 python3.9[36836]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:33:42 compute-0 sudo[36834]: pam_unix(sudo:session): session closed for user root
Nov 26 11:33:43 compute-0 sudo[36987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvdiqmbdcijozayrquztgzclgisysrcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156822.8533502-276-212975408957579/AnsiballZ_file.py'
Nov 26 11:33:43 compute-0 sudo[36987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:33:43 compute-0 python3.9[36989]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:33:43 compute-0 sudo[36987]: pam_unix(sudo:session): session closed for user root
Nov 26 11:33:43 compute-0 sudo[37139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsviwpfjdlfhfbmiajaamenvszvqktja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156823.5688214-287-50286267494006/AnsiballZ_getent.py'
Nov 26 11:33:43 compute-0 sudo[37139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:33:45 compute-0 python3.9[37141]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 26 11:33:45 compute-0 sudo[37139]: pam_unix(sudo:session): session closed for user root
Nov 26 11:33:45 compute-0 rsyslogd[960]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 11:33:45 compute-0 rsyslogd[960]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 11:33:46 compute-0 sudo[37293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnxhnlftutpnighwmscqhakujrcnoxzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156826.0996895-295-262423782362889/AnsiballZ_group.py'
Nov 26 11:33:46 compute-0 sudo[37293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:33:46 compute-0 python3.9[37295]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 11:33:46 compute-0 groupadd[37296]: group added to /etc/group: name=qemu, GID=107
Nov 26 11:33:46 compute-0 groupadd[37296]: group added to /etc/gshadow: name=qemu
Nov 26 11:33:46 compute-0 groupadd[37296]: new group: name=qemu, GID=107
Nov 26 11:33:46 compute-0 sudo[37293]: pam_unix(sudo:session): session closed for user root
Nov 26 11:33:47 compute-0 sudo[37451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynomaavjxafycfhethpqzcnmwrdxmytz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156826.7237427-303-181014030705415/AnsiballZ_user.py'
Nov 26 11:33:47 compute-0 sudo[37451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:33:47 compute-0 python3.9[37453]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 26 11:33:47 compute-0 useradd[37455]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Nov 26 11:33:47 compute-0 sudo[37451]: pam_unix(sudo:session): session closed for user root
Nov 26 11:33:47 compute-0 sudo[37611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdlazywvdjlivhwotaxamzqnwneuthxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156827.4036446-311-41673574972329/AnsiballZ_getent.py'
Nov 26 11:33:47 compute-0 sudo[37611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:33:47 compute-0 python3.9[37613]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 26 11:33:47 compute-0 sudo[37611]: pam_unix(sudo:session): session closed for user root
Nov 26 11:33:48 compute-0 sudo[37764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amqjgmoqqtxkhytjnndxkfigikhlllas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156827.9837496-319-198150283149657/AnsiballZ_group.py'
Nov 26 11:33:48 compute-0 sudo[37764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:33:48 compute-0 python3.9[37766]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 11:33:48 compute-0 groupadd[37767]: group added to /etc/group: name=hugetlbfs, GID=42477
Nov 26 11:33:48 compute-0 groupadd[37767]: group added to /etc/gshadow: name=hugetlbfs
Nov 26 11:33:48 compute-0 groupadd[37767]: new group: name=hugetlbfs, GID=42477
Nov 26 11:33:48 compute-0 sudo[37764]: pam_unix(sudo:session): session closed for user root
Nov 26 11:33:48 compute-0 sudo[37922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwhsvexenvtmfgvvfhwplczywgfjewzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156828.508772-328-229887078654088/AnsiballZ_file.py'
Nov 26 11:33:48 compute-0 sudo[37922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:33:48 compute-0 python3.9[37924]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 26 11:33:48 compute-0 sudo[37922]: pam_unix(sudo:session): session closed for user root
Nov 26 11:33:49 compute-0 sudo[38074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clovkflmpjmtsiiluvirbfenbusymeyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156829.0677886-339-275036998890609/AnsiballZ_dnf.py'
Nov 26 11:33:49 compute-0 sudo[38074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:33:49 compute-0 python3.9[38076]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 11:33:50 compute-0 sudo[38074]: pam_unix(sudo:session): session closed for user root
Nov 26 11:33:50 compute-0 sudo[38227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiyryhsgxwzodeujtlmjxaoequrvyszh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156830.8086643-347-27984523005903/AnsiballZ_file.py'
Nov 26 11:33:50 compute-0 sudo[38227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:33:51 compute-0 python3.9[38229]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:33:51 compute-0 sudo[38227]: pam_unix(sudo:session): session closed for user root
Nov 26 11:33:51 compute-0 sudo[38379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqoalovphrguhdukijydybaedeyqcxoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156831.2610037-355-82379002951596/AnsiballZ_stat.py'
Nov 26 11:33:51 compute-0 sudo[38379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:33:51 compute-0 python3.9[38381]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:33:51 compute-0 sudo[38379]: pam_unix(sudo:session): session closed for user root
Nov 26 11:33:51 compute-0 sudo[38502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqcfdtvhmdgjmzqpxptzeqhpxobwhtch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156831.2610037-355-82379002951596/AnsiballZ_copy.py'
Nov 26 11:33:51 compute-0 sudo[38502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:33:51 compute-0 python3.9[38504]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764156831.2610037-355-82379002951596/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:33:51 compute-0 sudo[38502]: pam_unix(sudo:session): session closed for user root
Nov 26 11:33:52 compute-0 sudo[38654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yacgnknhwolmssmqqtiuwauuthzqjysz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156832.0957475-370-190289541837632/AnsiballZ_systemd.py'
Nov 26 11:33:52 compute-0 sudo[38654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:33:52 compute-0 python3.9[38656]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 11:33:52 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 26 11:33:52 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 26 11:33:52 compute-0 kernel: Bridge firewalling registered
Nov 26 11:33:52 compute-0 systemd-modules-load[38660]: Inserted module 'br_netfilter'
Nov 26 11:33:52 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 26 11:33:52 compute-0 sudo[38654]: pam_unix(sudo:session): session closed for user root
Nov 26 11:33:53 compute-0 sudo[38813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xncxkxedenhzrilzrhwdlhctzuivwucj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156832.9807053-378-67776848506735/AnsiballZ_stat.py'
Nov 26 11:33:53 compute-0 sudo[38813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:33:53 compute-0 python3.9[38815]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:33:53 compute-0 sudo[38813]: pam_unix(sudo:session): session closed for user root
Nov 26 11:33:53 compute-0 sudo[38936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxglpyydmvylhewhjgqsswuttyiolpkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156832.9807053-378-67776848506735/AnsiballZ_copy.py'
Nov 26 11:33:53 compute-0 sudo[38936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:33:53 compute-0 python3.9[38938]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764156832.9807053-378-67776848506735/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:33:53 compute-0 sudo[38936]: pam_unix(sudo:session): session closed for user root
Nov 26 11:33:54 compute-0 sudo[39088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhnbduskbhbuqsjcztdumgzkoiawfeup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156833.8907576-396-93258341759081/AnsiballZ_dnf.py'
Nov 26 11:33:54 compute-0 sudo[39088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:33:54 compute-0 python3.9[39090]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 11:33:59 compute-0 dbus-broker-launch[724]: Noticed file-system modification, trigger reload.
Nov 26 11:33:59 compute-0 dbus-broker-launch[724]: Noticed file-system modification, trigger reload.
Nov 26 11:34:00 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 11:34:00 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 26 11:34:00 compute-0 systemd[1]: Reloading.
Nov 26 11:34:00 compute-0 systemd-rc-local-generator[39148]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:34:00 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 11:34:00 compute-0 sudo[39088]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:01 compute-0 python3.9[40442]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:34:01 compute-0 python3.9[41562]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 26 11:34:02 compute-0 python3.9[42633]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:34:02 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 11:34:02 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 26 11:34:02 compute-0 systemd[1]: man-db-cache-update.service: Consumed 3.146s CPU time.
Nov 26 11:34:02 compute-0 systemd[1]: run-r6580fd04d7a54ee895c5345000a8540e.service: Deactivated successfully.
Nov 26 11:34:02 compute-0 sudo[43251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knzrbdtmrkggjrkjrwgskgwhdjipnirm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156842.5854077-435-64042656163075/AnsiballZ_command.py'
Nov 26 11:34:02 compute-0 sudo[43251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:02 compute-0 python3.9[43253]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:34:03 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 26 11:34:03 compute-0 systemd[1]: Starting Authorization Manager...
Nov 26 11:34:03 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 26 11:34:03 compute-0 polkitd[43470]: Started polkitd version 0.117
Nov 26 11:34:03 compute-0 polkitd[43470]: Loading rules from directory /etc/polkit-1/rules.d
Nov 26 11:34:03 compute-0 polkitd[43470]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 26 11:34:03 compute-0 polkitd[43470]: Finished loading, compiling and executing 2 rules
Nov 26 11:34:03 compute-0 polkitd[43470]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 26 11:34:03 compute-0 systemd[1]: Started Authorization Manager.
Nov 26 11:34:03 compute-0 sudo[43251]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:03 compute-0 sudo[43634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haljncrwswgkfjkuxnndnqzvkvhrekze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156843.5184507-444-229256659563538/AnsiballZ_systemd.py'
Nov 26 11:34:03 compute-0 sudo[43634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:03 compute-0 python3.9[43636]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:34:03 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 26 11:34:04 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 26 11:34:04 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 26 11:34:04 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 26 11:34:04 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 26 11:34:04 compute-0 sudo[43634]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:04 compute-0 python3.9[43797]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 26 11:34:05 compute-0 sudo[43947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiemwvlymeqsrzidcpbgjdxsadynymnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156845.8197367-501-137410397139168/AnsiballZ_systemd.py'
Nov 26 11:34:06 compute-0 sudo[43947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:06 compute-0 python3.9[43949]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:34:06 compute-0 systemd[1]: Reloading.
Nov 26 11:34:06 compute-0 systemd-rc-local-generator[43972]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:34:06 compute-0 sudo[43947]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:06 compute-0 sudo[44135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehyizgclbedqgfgnmdyajjgcntnwictg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156846.6246753-501-72944301538351/AnsiballZ_systemd.py'
Nov 26 11:34:06 compute-0 sudo[44135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:07 compute-0 python3.9[44137]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:34:07 compute-0 systemd[1]: Reloading.
Nov 26 11:34:07 compute-0 systemd-rc-local-generator[44165]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:34:07 compute-0 sudo[44135]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:07 compute-0 sudo[44324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irryitwszqamurmyjsdoyplbjcdsdeuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156847.3853602-517-83953148536003/AnsiballZ_command.py'
Nov 26 11:34:07 compute-0 sudo[44324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:07 compute-0 python3.9[44326]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:34:07 compute-0 sudo[44324]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:08 compute-0 sudo[44477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djuhovlzynwglcgqcthnxaesuxfypeah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156847.8376095-525-154089015495474/AnsiballZ_command.py'
Nov 26 11:34:08 compute-0 sudo[44477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:08 compute-0 python3.9[44479]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:34:08 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 26 11:34:08 compute-0 sudo[44477]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:08 compute-0 sudo[44630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cytsrtrbvpbydcjtixtpdlggunrszzyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156848.284146-533-123026486747388/AnsiballZ_command.py'
Nov 26 11:34:08 compute-0 sudo[44630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:08 compute-0 python3.9[44632]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:34:09 compute-0 sudo[44630]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:09 compute-0 sudo[44792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgfjkhplrjkvrmxfbidronsusnxupmrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156849.779404-541-82352603975167/AnsiballZ_command.py'
Nov 26 11:34:09 compute-0 sudo[44792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:10 compute-0 python3.9[44794]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:34:10 compute-0 sudo[44792]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:10 compute-0 sudo[44945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnmgygqweijiwpgknzgvojjlsmsyzigf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156850.2158067-549-241085998211603/AnsiballZ_systemd.py'
Nov 26 11:34:10 compute-0 sudo[44945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:10 compute-0 python3.9[44947]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 11:34:10 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 26 11:34:10 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Nov 26 11:34:10 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Nov 26 11:34:10 compute-0 systemd[1]: Starting Apply Kernel Variables...
Nov 26 11:34:10 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 26 11:34:10 compute-0 systemd[1]: Finished Apply Kernel Variables.
Nov 26 11:34:10 compute-0 sudo[44945]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:10 compute-0 sshd-session[31347]: Connection closed by 192.168.122.30 port 45032
Nov 26 11:34:10 compute-0 sshd-session[31344]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:34:10 compute-0 systemd-logind[744]: Session 8 logged out. Waiting for processes to exit.
Nov 26 11:34:10 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Nov 26 11:34:10 compute-0 systemd[1]: session-8.scope: Consumed 1min 40.211s CPU time.
Nov 26 11:34:10 compute-0 systemd-logind[744]: Removed session 8.
Nov 26 11:34:16 compute-0 sshd-session[44977]: Accepted publickey for zuul from 192.168.122.30 port 54188 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:34:16 compute-0 systemd-logind[744]: New session 9 of user zuul.
Nov 26 11:34:16 compute-0 systemd[1]: Started Session 9 of User zuul.
Nov 26 11:34:16 compute-0 sshd-session[44977]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:34:17 compute-0 python3.9[45130]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:34:18 compute-0 sudo[45284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycvycumxahzsvhkhcdziivacqcgxcftw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156857.738754-36-218512984623133/AnsiballZ_getent.py'
Nov 26 11:34:18 compute-0 sudo[45284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:18 compute-0 python3.9[45286]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 26 11:34:18 compute-0 sudo[45284]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:18 compute-0 sudo[45437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evxcjopeycrjwwxidispdcgbzdeynfxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156858.3292708-44-126407436449837/AnsiballZ_group.py'
Nov 26 11:34:18 compute-0 sudo[45437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:18 compute-0 python3.9[45439]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 11:34:18 compute-0 groupadd[45440]: group added to /etc/group: name=openvswitch, GID=42476
Nov 26 11:34:18 compute-0 groupadd[45440]: group added to /etc/gshadow: name=openvswitch
Nov 26 11:34:18 compute-0 groupadd[45440]: new group: name=openvswitch, GID=42476
Nov 26 11:34:18 compute-0 sudo[45437]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:19 compute-0 sudo[45595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzsjhzqnkkswmhqtejhliexwdztevcvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156858.9405665-52-194560108596856/AnsiballZ_user.py'
Nov 26 11:34:19 compute-0 sudo[45595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:19 compute-0 python3.9[45597]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 26 11:34:19 compute-0 useradd[45599]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Nov 26 11:34:19 compute-0 useradd[45599]: add 'openvswitch' to group 'hugetlbfs'
Nov 26 11:34:19 compute-0 useradd[45599]: add 'openvswitch' to shadow group 'hugetlbfs'
Nov 26 11:34:19 compute-0 sudo[45595]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:19 compute-0 sudo[45755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqbimdrafbzcvhvyhnhtohaqdtujivzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156859.691901-62-83457101225851/AnsiballZ_setup.py'
Nov 26 11:34:19 compute-0 sudo[45755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:20 compute-0 python3.9[45757]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 11:34:20 compute-0 sudo[45755]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:20 compute-0 sudo[45839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oitacmuonmsampquiyhpzgbrzpqkdpza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156859.691901-62-83457101225851/AnsiballZ_dnf.py'
Nov 26 11:34:20 compute-0 sudo[45839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:20 compute-0 python3.9[45841]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 26 11:34:26 compute-0 sudo[45839]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:26 compute-0 sudo[46005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-askvoomrlelhyqjvhppkjkjyyiddqqxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156866.4943233-76-220428078899279/AnsiballZ_dnf.py'
Nov 26 11:34:26 compute-0 sudo[46005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:26 compute-0 python3.9[46007]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 11:34:35 compute-0 kernel: SELinux:  Converting 2730 SID table entries...
Nov 26 11:34:35 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 11:34:35 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 26 11:34:35 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 11:34:35 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 26 11:34:35 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 11:34:35 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 11:34:35 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 11:34:35 compute-0 groupadd[46030]: group added to /etc/group: name=unbound, GID=993
Nov 26 11:34:35 compute-0 groupadd[46030]: group added to /etc/gshadow: name=unbound
Nov 26 11:34:35 compute-0 groupadd[46030]: new group: name=unbound, GID=993
Nov 26 11:34:35 compute-0 useradd[46037]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Nov 26 11:34:35 compute-0 dbus-broker-launch[733]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 26 11:34:35 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 26 11:34:36 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 11:34:36 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 26 11:34:36 compute-0 systemd[1]: Reloading.
Nov 26 11:34:36 compute-0 systemd-sysv-generator[46531]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:34:36 compute-0 systemd-rc-local-generator[46528]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:34:36 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 11:34:36 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 11:34:36 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 26 11:34:36 compute-0 systemd[1]: run-rbcd7d3ca7cdf446a8f315357eb3c26b7.service: Deactivated successfully.
Nov 26 11:34:36 compute-0 sudo[46005]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:37 compute-0 sudo[47102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aouenggubvffpkjvcphwdugrmvwpkmwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156877.0370774-84-222082733203723/AnsiballZ_systemd.py'
Nov 26 11:34:37 compute-0 sudo[47102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:37 compute-0 python3.9[47104]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 11:34:37 compute-0 systemd[1]: Reloading.
Nov 26 11:34:37 compute-0 systemd-sysv-generator[47135]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:34:37 compute-0 systemd-rc-local-generator[47131]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:34:37 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Nov 26 11:34:37 compute-0 chown[47147]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 26 11:34:37 compute-0 ovs-ctl[47152]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 26 11:34:38 compute-0 ovs-ctl[47152]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 26 11:34:38 compute-0 ovs-ctl[47152]: Starting ovsdb-server [  OK  ]
Nov 26 11:34:38 compute-0 ovs-vsctl[47201]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 26 11:34:38 compute-0 ovs-vsctl[47221]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"52e0423b-b2d6-4490-a138-5f72d3aa5a2d\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 26 11:34:38 compute-0 ovs-ctl[47152]: Configuring Open vSwitch system IDs [  OK  ]
Nov 26 11:34:38 compute-0 ovs-ctl[47152]: Enabling remote OVSDB managers [  OK  ]
Nov 26 11:34:38 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Nov 26 11:34:38 compute-0 ovs-vsctl[47227]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 26 11:34:38 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 26 11:34:38 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 26 11:34:38 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 26 11:34:38 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Nov 26 11:34:38 compute-0 ovs-ctl[47272]: Inserting openvswitch module [  OK  ]
Nov 26 11:34:38 compute-0 ovs-ctl[47241]: Starting ovs-vswitchd [  OK  ]
Nov 26 11:34:38 compute-0 ovs-ctl[47241]: Enabling remote OVSDB managers [  OK  ]
Nov 26 11:34:38 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 26 11:34:38 compute-0 ovs-vsctl[47290]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 26 11:34:38 compute-0 systemd[1]: Starting Open vSwitch...
Nov 26 11:34:38 compute-0 systemd[1]: Finished Open vSwitch.
Nov 26 11:34:38 compute-0 sudo[47102]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:38 compute-0 python3.9[47441]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:34:39 compute-0 sudo[47591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyxifnisoqnoalsfrwauxhvqbghkidwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156879.0793312-102-167784841796022/AnsiballZ_sefcontext.py'
Nov 26 11:34:39 compute-0 sudo[47591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:39 compute-0 python3.9[47593]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 26 11:34:40 compute-0 kernel: SELinux:  Converting 2744 SID table entries...
Nov 26 11:34:40 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 11:34:40 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 26 11:34:40 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 11:34:40 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 26 11:34:40 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 11:34:40 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 11:34:40 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 11:34:40 compute-0 sudo[47591]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:41 compute-0 python3.9[47748]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:34:41 compute-0 sudo[47904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfbtpfgmsyqyrenenlyvzptvdbublhdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156881.4367762-120-139464846972763/AnsiballZ_dnf.py'
Nov 26 11:34:41 compute-0 dbus-broker-launch[733]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 26 11:34:41 compute-0 sudo[47904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:41 compute-0 python3.9[47906]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 11:34:42 compute-0 sudo[47904]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:43 compute-0 sudo[48057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iudgtkvfmcvktvbwkbdukcskoczzzzrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156882.9183476-128-52213677238497/AnsiballZ_command.py'
Nov 26 11:34:43 compute-0 sudo[48057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:43 compute-0 python3.9[48059]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:34:43 compute-0 sudo[48057]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:44 compute-0 sudo[48344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmdybeqvoaobtioanqhcltlmvytegesj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156884.0226662-136-63311872612479/AnsiballZ_file.py'
Nov 26 11:34:44 compute-0 sudo[48344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:44 compute-0 python3.9[48346]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 26 11:34:44 compute-0 sudo[48344]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:45 compute-0 python3.9[48496]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:34:45 compute-0 sudo[48648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igpyzbdvzrmriweizintfookemehlype ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156885.3124945-152-59526866369319/AnsiballZ_dnf.py'
Nov 26 11:34:45 compute-0 sudo[48648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:45 compute-0 python3.9[48650]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 11:34:48 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 11:34:48 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 26 11:34:48 compute-0 systemd[1]: Reloading.
Nov 26 11:34:48 compute-0 systemd-sysv-generator[48691]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:34:48 compute-0 systemd-rc-local-generator[48688]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:34:48 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 11:34:49 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 11:34:49 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 26 11:34:49 compute-0 systemd[1]: run-rddcc7fec28ee43c2b42e66cd33c1df2b.service: Deactivated successfully.
Nov 26 11:34:49 compute-0 sudo[48648]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:49 compute-0 sudo[48965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gogujvasvtyazjgzipgdbiwezocpemts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156889.330252-160-118205235738463/AnsiballZ_systemd.py'
Nov 26 11:34:49 compute-0 sudo[48965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:49 compute-0 python3.9[48967]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 11:34:49 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 26 11:34:49 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Nov 26 11:34:49 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Nov 26 11:34:49 compute-0 NetworkManager[7251]: <info>  [1764156889.7783] caught SIGTERM, shutting down normally.
Nov 26 11:34:49 compute-0 systemd[1]: Stopping Network Manager...
Nov 26 11:34:49 compute-0 NetworkManager[7251]: <info>  [1764156889.7790] dhcp4 (eth0): canceled DHCP transaction
Nov 26 11:34:49 compute-0 NetworkManager[7251]: <info>  [1764156889.7791] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 11:34:49 compute-0 NetworkManager[7251]: <info>  [1764156889.7791] dhcp4 (eth0): state changed no lease
Nov 26 11:34:49 compute-0 NetworkManager[7251]: <info>  [1764156889.7792] dhcp6 (eth0): canceled DHCP transaction
Nov 26 11:34:49 compute-0 NetworkManager[7251]: <info>  [1764156889.7792] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 11:34:49 compute-0 NetworkManager[7251]: <info>  [1764156889.7792] dhcp6 (eth0): state changed no lease
Nov 26 11:34:49 compute-0 NetworkManager[7251]: <info>  [1764156889.7793] manager: NetworkManager state is now CONNECTED_SITE
Nov 26 11:34:49 compute-0 NetworkManager[7251]: <info>  [1764156889.7818] exiting (success)
Nov 26 11:34:49 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 26 11:34:49 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 26 11:34:49 compute-0 systemd[1]: Stopped Network Manager.
Nov 26 11:34:49 compute-0 systemd[1]: Starting Network Manager...
Nov 26 11:34:49 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8179] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:85c12273-0edc-4b34-861a-c0940ef400f5)
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8181] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8220] manager[0x556dfc621010]: monitoring kernel firmware directory '/lib/firmware'.
Nov 26 11:34:49 compute-0 systemd[1]: Starting Hostname Service...
Nov 26 11:34:49 compute-0 systemd[1]: Started Hostname Service.
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8787] hostname: hostname: using hostnamed
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8787] hostname: static hostname changed from (none) to "compute-0"
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8790] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8792] manager[0x556dfc621010]: rfkill: Wi-Fi hardware radio set enabled
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8793] manager[0x556dfc621010]: rfkill: WWAN hardware radio set enabled
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8806] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8813] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8813] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8813] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8814] manager: Networking is enabled by state file
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8815] settings: Loaded settings plugin: keyfile (internal)
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8818] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8834] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8841] dhcp: init: Using DHCP client 'internal'
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8842] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8845] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8849] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8853] device (lo): Activation: starting connection 'lo' (868fb90f-4437-4595-9529-b8bb5b9dbd08)
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8858] device (eth0): carrier: link connected
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8861] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8864] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8864] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8868] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8871] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8875] device (eth1): carrier: link connected
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8878] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8881] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (99a73924-2eda-5c64-a2d8-18ad8013f642) (indicated)
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8881] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8884] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8888] device (eth1): Activation: starting connection 'ci-private-network' (99a73924-2eda-5c64-a2d8-18ad8013f642)
Nov 26 11:34:49 compute-0 systemd[1]: Started Network Manager.
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8892] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8895] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8896] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8898] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8899] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8900] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8902] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8903] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8904] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8907] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8909] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8911] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8916] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8919] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8923] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8934] dhcp4 (eth0): state changed new lease, address=192.168.26.91
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8942] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8960] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8961] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8962] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8964] device (lo): Activation: successful, device activated.
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8968] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8969] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 26 11:34:49 compute-0 NetworkManager[48976]: <info>  [1764156889.8971] device (eth1): Activation: successful, device activated.
Nov 26 11:34:49 compute-0 systemd[1]: Starting Network Manager Wait Online...
Nov 26 11:34:49 compute-0 sudo[48965]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:50 compute-0 sudo[49174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smmcebvercggektmymzpvgehltqyzmqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156890.044123-168-24895876646719/AnsiballZ_dnf.py'
Nov 26 11:34:50 compute-0 sudo[49174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:50 compute-0 python3.9[49176]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 11:34:50 compute-0 NetworkManager[48976]: <info>  [1764156890.9146] dhcp6 (eth0): state changed new lease, address=2001:db8::cb
Nov 26 11:34:50 compute-0 NetworkManager[48976]: <info>  [1764156890.9155] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 26 11:34:50 compute-0 NetworkManager[48976]: <info>  [1764156890.9185] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 26 11:34:50 compute-0 NetworkManager[48976]: <info>  [1764156890.9186] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 26 11:34:50 compute-0 NetworkManager[48976]: <info>  [1764156890.9188] manager: NetworkManager state is now CONNECTED_SITE
Nov 26 11:34:50 compute-0 NetworkManager[48976]: <info>  [1764156890.9191] device (eth0): Activation: successful, device activated.
Nov 26 11:34:50 compute-0 NetworkManager[48976]: <info>  [1764156890.9193] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 26 11:34:50 compute-0 NetworkManager[48976]: <info>  [1764156890.9194] manager: startup complete
Nov 26 11:34:50 compute-0 systemd[1]: Finished Network Manager Wait Online.
Nov 26 11:34:57 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 11:34:57 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 26 11:34:57 compute-0 systemd[1]: Reloading.
Nov 26 11:34:57 compute-0 systemd-sysv-generator[49244]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:34:57 compute-0 systemd-rc-local-generator[49241]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:34:57 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 11:34:57 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 11:34:57 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 26 11:34:57 compute-0 systemd[1]: run-rbb30772a29294ae3a8bd99a54b870068.service: Deactivated successfully.
Nov 26 11:34:58 compute-0 sudo[49174]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:58 compute-0 sudo[49652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfzvzkwmosvunobdhitsesdbupfblkud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156898.5682895-180-193568774848378/AnsiballZ_stat.py'
Nov 26 11:34:58 compute-0 sudo[49652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:58 compute-0 python3.9[49654]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:34:58 compute-0 sudo[49652]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:59 compute-0 sudo[49804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwxufmcdiqwficmjsusjznaokpzziwkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156899.0341604-189-17339819209256/AnsiballZ_ini_file.py'
Nov 26 11:34:59 compute-0 sudo[49804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:59 compute-0 python3.9[49806]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:34:59 compute-0 sudo[49804]: pam_unix(sudo:session): session closed for user root
Nov 26 11:34:59 compute-0 sudo[49958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zylnibyljrxvqmjjdjujwkvvbugcllos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156899.6772323-199-203052421522262/AnsiballZ_ini_file.py'
Nov 26 11:34:59 compute-0 sudo[49958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:34:59 compute-0 python3.9[49960]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:35:00 compute-0 sudo[49958]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:00 compute-0 sudo[50110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vowrydznoptrnltzqkcbpyepjyeuzzuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156900.097633-199-79331713199173/AnsiballZ_ini_file.py'
Nov 26 11:35:00 compute-0 sudo[50110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:00 compute-0 python3.9[50112]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:35:00 compute-0 sudo[50110]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:00 compute-0 sudo[50264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukhxijwyhwrqcycexmgamsivgptuqmzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156900.5456834-214-272512661553591/AnsiballZ_ini_file.py'
Nov 26 11:35:00 compute-0 sudo[50264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:00 compute-0 python3.9[50266]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:35:00 compute-0 sudo[50264]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:00 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 26 11:35:01 compute-0 sudo[50416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imunindxlyjrjbbfojfobtnnprxuurbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156900.9644177-214-400129554778/AnsiballZ_ini_file.py'
Nov 26 11:35:01 compute-0 sudo[50416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:01 compute-0 python3.9[50418]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:35:01 compute-0 sudo[50416]: pam_unix(sudo:session): session closed for user root
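
The ini_file tasks above edit the [main] section of NetworkManager's configuration: no-auto-default=* is added to /etc/NetworkManager/NetworkManager.conf, and any dns=none / rc-manager=unmanaged keys are dropped from that file and from cloud-init's drop-in, leaving NetworkManager in charge of resolv.conf and preventing it from auto-generating profiles for new interfaces. A minimal Python sketch of the same edits, assuming plain configparser semantics rather than the actual community.general.ini_file module (which also keeps backups, preserves comments, and only removes the keys when they carry exactly those values):

    # Illustrative only: approximates the ini_file tasks logged above; paths and
    # option names are taken from the log, behaviour is simplified.
    import configparser

    def edit_main_section(path, set_opts=None, drop_opts=()):
        cfg = configparser.ConfigParser()
        cfg.read(path)                         # a missing file starts empty (create=True)
        if not cfg.has_section("main"):
            cfg.add_section("main")
        for key, value in (set_opts or {}).items():
            cfg.set("main", key, value)        # e.g. no-auto-default=*
        for key in drop_opts:
            cfg.remove_option("main", key)     # drops dns / rc-manager unconditionally
        with open(path, "w") as fh:
            cfg.write(fh, space_around_delimiters=False)   # mirrors no_extra_spaces=True

    edit_main_section("/etc/NetworkManager/NetworkManager.conf",
                      set_opts={"no-auto-default": "*"},
                      drop_opts=("dns", "rc-manager"))
    edit_main_section("/etc/NetworkManager/conf.d/99-cloud-init.conf",
                      drop_opts=("dns", "rc-manager"))
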
Nov 26 11:35:01 compute-0 sudo[50568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaflczjajeikaepngtpklpmherockfsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156901.3943884-229-164744154440875/AnsiballZ_stat.py'
Nov 26 11:35:01 compute-0 sudo[50568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:01 compute-0 python3.9[50570]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:35:01 compute-0 sudo[50568]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:02 compute-0 sudo[50691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgemrkvfkughakncyjqgpkhgbtzyjmda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156901.3943884-229-164744154440875/AnsiballZ_copy.py'
Nov 26 11:35:02 compute-0 sudo[50691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:02 compute-0 python3.9[50693]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764156901.3943884-229-164744154440875/.source _original_basename=.febst2t3 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:35:02 compute-0 sudo[50691]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:02 compute-0 sudo[50843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmkvuhjckrbxdwbqmdqkrmogzmnukqhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156902.3050013-244-214738538754806/AnsiballZ_file.py'
Nov 26 11:35:02 compute-0 sudo[50843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:02 compute-0 python3.9[50845]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:35:02 compute-0 sudo[50843]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:03 compute-0 sudo[50995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeglrzppaxdlxkoroyzmrqeukixywdkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156902.7520664-252-231569825578647/AnsiballZ_edpm_os_net_config_mappings.py'
Nov 26 11:35:03 compute-0 sudo[50995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:03 compute-0 python3.9[50997]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 26 11:35:03 compute-0 sudo[50995]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:03 compute-0 sudo[51147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbexbgsubjnhvnfxqvyiqplimegfqiaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156903.3409646-261-150882165087730/AnsiballZ_file.py'
Nov 26 11:35:03 compute-0 sudo[51147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:03 compute-0 python3.9[51149]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:35:03 compute-0 sudo[51147]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:04 compute-0 sudo[51299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypiqhvlyjiruzniezzxlrifeyqvkbvhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156903.880688-271-182268057389842/AnsiballZ_stat.py'
Nov 26 11:35:04 compute-0 sudo[51299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:04 compute-0 sudo[51299]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:04 compute-0 sudo[51422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqdymsgmwdkauaqfcjmjzhytgabtowjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156903.880688-271-182268057389842/AnsiballZ_copy.py'
Nov 26 11:35:04 compute-0 sudo[51422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:04 compute-0 sudo[51422]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:04 compute-0 sudo[51574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgtshxaigfhoihnmajdwgtggoawxoebj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156904.6970222-286-269519592238813/AnsiballZ_slurp.py'
Nov 26 11:35:04 compute-0 sudo[51574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:05 compute-0 python3.9[51576]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 26 11:35:05 compute-0 sudo[51574]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:05 compute-0 sudo[51749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmywpjgmcohfzsgjdreaotvklxpizjas ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156905.2905533-295-126317383633202/async_wrapper.py j544925718064 300 /home/zuul/.ansible/tmp/ansible-tmp-1764156905.2905533-295-126317383633202/AnsiballZ_edpm_os_net_config.py _'
Nov 26 11:35:05 compute-0 sudo[51749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:05 compute-0 ansible-async_wrapper.py[51751]: Invoked with j544925718064 300 /home/zuul/.ansible/tmp/ansible-tmp-1764156905.2905533-295-126317383633202/AnsiballZ_edpm_os_net_config.py _
Nov 26 11:35:05 compute-0 ansible-async_wrapper.py[51754]: Starting module and watcher
Nov 26 11:35:05 compute-0 ansible-async_wrapper.py[51754]: Start watching 51755 (300)
Nov 26 11:35:05 compute-0 ansible-async_wrapper.py[51755]: Start module (51755)
Nov 26 11:35:05 compute-0 ansible-async_wrapper.py[51751]: Return async_wrapper task started.
Nov 26 11:35:05 compute-0 sudo[51749]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:06 compute-0 python3.9[51756]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
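
edpm_os_net_config applies /etc/os-net-config/config.yaml (slurped a few entries earlier; its contents are not captured in this log) through the nmstate provider, with cleanup of unknown devices and detailed exit codes enabled. Judging only from the NetworkManager connections it creates below (an OVS bridge br-ex carrying eth1 plus OVS interfaces vlan20-vlan23), the file plausibly describes a single ovs_bridge entry. A hypothetical Python sketch that emits a config of that shape; every address is an invented placeholder and the real file may differ:

    # Hypothetical reconstruction only. Device names (br-ex, eth1, vlan20-23) come
    # from the NetworkManager messages below; addresses are placeholders, not data
    # from this run. Requires PyYAML.
    import yaml

    bridge = {
        "type": "ovs_bridge",
        "name": "br-ex",
        "use_dhcp": False,
        "members": [{"type": "interface", "name": "eth1"}]
        + [
            {"type": "vlan", "vlan_id": vid,
             "addresses": [{"ip_netmask": f"172.17.{vid}.100/24"}]}
            for vid in (20, 21, 22, 23)
        ],
    }

    with open("/tmp/config.yaml.example", "w") as fh:
        yaml.safe_dump({"network_config": [bridge]}, fh, sort_keys=False)
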
Nov 26 11:35:06 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 26 11:35:06 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 26 11:35:06 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 26 11:35:06 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 26 11:35:06 compute-0 kernel: cfg80211: failed to load regulatory.db
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3338] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51757 uid=0 result="success"
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3353] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51757 uid=0 result="success"
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3725] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3727] audit: op="connection-add" uuid="4780f9a5-77f4-4ad3-bce6-90d42244b677" name="br-ex-br" pid=51757 uid=0 result="success"
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3737] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3739] audit: op="connection-add" uuid="25d5ebf2-2672-4405-8e77-01197c117e34" name="br-ex-port" pid=51757 uid=0 result="success"
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3748] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3750] audit: op="connection-add" uuid="7b08d278-fd75-45fa-87da-7a14154ba681" name="eth1-port" pid=51757 uid=0 result="success"
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3761] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3762] audit: op="connection-add" uuid="28640b29-8358-46dd-acb7-85d6c8f81804" name="vlan20-port" pid=51757 uid=0 result="success"
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3772] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3774] audit: op="connection-add" uuid="b48766f2-c3e1-4b39-8875-79b83e55f933" name="vlan21-port" pid=51757 uid=0 result="success"
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3783] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3784] audit: op="connection-add" uuid="2e7a410a-8624-470c-90e1-781bcc4a2af6" name="vlan22-port" pid=51757 uid=0 result="success"
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3793] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3794] audit: op="connection-add" uuid="68db492c-df5b-4f57-a7d0-b571dce0e996" name="vlan23-port" pid=51757 uid=0 result="success"
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3811] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.routes,ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.may-fail,connection.timestamp,connection.autoconnect-priority,802-3-ethernet.mtu" pid=51757 uid=0 result="success"
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3824] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3825] audit: op="connection-add" uuid="136d9700-3f9d-4193-b0c7-921f76926c2c" name="br-ex-if" pid=51757 uid=0 result="success"
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3845] audit: op="connection-update" uuid="99a73924-2eda-5c64-a2d8-18ad8013f642" name="ci-private-network" args="ipv4.addresses,ipv4.method,ipv4.dns,ipv4.never-default,ipv4.routing-rules,ipv4.routes,ipv6.addresses,ipv6.method,ipv6.dns,ipv6.routes,ipv6.routing-rules,ipv6.addr-gen-mode,connection.port-type,connection.slave-type,connection.timestamp,connection.master,connection.controller,ovs-external-ids.data,ovs-interface.type" pid=51757 uid=0 result="success"
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3858] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3860] audit: op="connection-add" uuid="b9e72e30-be85-4e15-bd5d-3ec18b1f3501" name="vlan20-if" pid=51757 uid=0 result="success"
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3872] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3874] audit: op="connection-add" uuid="d953e5eb-aefc-4732-9c41-48c4a026c468" name="vlan21-if" pid=51757 uid=0 result="success"
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3886] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3888] audit: op="connection-add" uuid="81b0b568-4584-4874-8af3-9d0b612fa43b" name="vlan22-if" pid=51757 uid=0 result="success"
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3900] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3902] audit: op="connection-add" uuid="d32e6988-716b-4a3f-b454-94e300e90760" name="vlan23-if" pid=51757 uid=0 result="success"
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3910] audit: op="connection-delete" uuid="3c01c958-ae07-3583-8f8f-8a96e3659e99" name="Wired connection 1" pid=51757 uid=0 result="success"
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3919] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3927] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3931] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (4780f9a5-77f4-4ad3-bce6-90d42244b677)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3932] audit: op="connection-activate" uuid="4780f9a5-77f4-4ad3-bce6-90d42244b677" name="br-ex-br" pid=51757 uid=0 result="success"
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3933] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3939] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3942] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (25d5ebf2-2672-4405-8e77-01197c117e34)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3944] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3949] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3952] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (7b08d278-fd75-45fa-87da-7a14154ba681)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3954] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3959] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3963] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (28640b29-8358-46dd-acb7-85d6c8f81804)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3965] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3970] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3974] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (b48766f2-c3e1-4b39-8875-79b83e55f933)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3976] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3981] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3985] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (2e7a410a-8624-470c-90e1-781bcc4a2af6)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3987] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3992] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3995] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (68db492c-df5b-4f57-a7d0-b571dce0e996)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3996] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.3998] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4000] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4005] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4009] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4012] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (136d9700-3f9d-4193-b0c7-921f76926c2c)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4013] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4016] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4018] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4019] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4021] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4029] device (eth1): disconnecting for new activation request.
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4030] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4032] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4034] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4035] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4038] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4042] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4045] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (b9e72e30-be85-4e15-bd5d-3ec18b1f3501)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4046] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4050] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4052] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4053] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4055] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4059] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4063] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (d953e5eb-aefc-4732-9c41-48c4a026c468)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4064] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4067] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4069] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4070] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4072] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4076] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4080] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (81b0b568-4584-4874-8af3-9d0b612fa43b)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4081] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4083] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4085] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4087] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4089] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4092] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4097] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (d32e6988-716b-4a3f-b454-94e300e90760)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4098] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4100] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4102] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4103] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4104] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4114] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.routes,ipv6.method,ipv6.addr-gen-mode,ipv6.may-fail,connection.autoconnect-priority,802-3-ethernet.mtu" pid=51757 uid=0 result="success"
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4116] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4119] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4121] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4127] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4130] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4132] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4136] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4137] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4141] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4144] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4148] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 kernel: ovs-system: entered promiscuous mode
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4151] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 systemd-udevd[51762]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4158] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4163] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 kernel: Timeout policy base is empty
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4167] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4168] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4172] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4176] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4179] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4180] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4185] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4188] dhcp4 (eth0): canceled DHCP transaction
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4189] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4189] dhcp4 (eth0): state changed no lease
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4190] dhcp6 (eth0): canceled DHCP transaction
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4190] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4191] dhcp6 (eth0): state changed no lease
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4195] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4203] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4209] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51757 uid=0 result="fail" reason="Device is not activated"
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4214] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4220] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 26 11:35:07 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4242] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4255] dhcp4 (eth0): state changed new lease, address=192.168.26.91
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4260] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4285] device (eth1): disconnecting for new activation request.
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4285] audit: op="connection-activate" uuid="99a73924-2eda-5c64-a2d8-18ad8013f642" name="ci-private-network" pid=51757 uid=0 result="success"
Nov 26 11:35:07 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4311] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51757 uid=0 result="success"
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4312] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4402] device (eth1): Activation: starting connection 'ci-private-network' (99a73924-2eda-5c64-a2d8-18ad8013f642)
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4411] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4413] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4417] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4418] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4419] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4420] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4421] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4425] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4427] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4437] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4444] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4448] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 26 11:35:07 compute-0 kernel: br-ex: entered promiscuous mode
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4453] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4457] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4460] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4463] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4467] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4470] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4473] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4489] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4492] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4495] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4498] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4501] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4505] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4509] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4530] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4531] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4535] device (eth1): Activation: successful, device activated.
Nov 26 11:35:07 compute-0 kernel: vlan22: entered promiscuous mode
Nov 26 11:35:07 compute-0 systemd-udevd[51763]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 11:35:07 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4573] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4580] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 kernel: vlan20: entered promiscuous mode
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4641] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4654] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4663] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4669] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4685] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4713] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4714] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 kernel: vlan23: entered promiscuous mode
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4729] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4735] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 26 11:35:07 compute-0 kernel: vlan21: entered promiscuous mode
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4784] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4797] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4805] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4863] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4863] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4869] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4878] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4885] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4893] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4900] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4935] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4952] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4958] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 11:35:07 compute-0 NetworkManager[48976]: <info>  [1764156907.4967] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 26 11:35:08 compute-0 NetworkManager[48976]: <info>  [1764156908.5948] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51757 uid=0 result="success"
Nov 26 11:35:08 compute-0 NetworkManager[48976]: <info>  [1764156908.6933] checkpoint[0x556dfc5f8950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 26 11:35:08 compute-0 NetworkManager[48976]: <info>  [1764156908.6934] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51757 uid=0 result="success"
Nov 26 11:35:08 compute-0 NetworkManager[48976]: <info>  [1764156908.7934] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51757 uid=0 result="success"
Nov 26 11:35:08 compute-0 NetworkManager[48976]: <info>  [1764156908.7942] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51757 uid=0 result="success"
Nov 26 11:35:08 compute-0 NetworkManager[48976]: <info>  [1764156908.9428] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51757 uid=0 result="success"
Nov 26 11:35:09 compute-0 NetworkManager[48976]: <info>  [1764156909.0628] checkpoint[0x556dfc5f8a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 26 11:35:09 compute-0 NetworkManager[48976]: <info>  [1764156909.0631] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51757 uid=0 result="success"
Nov 26 11:35:09 compute-0 NetworkManager[48976]: <info>  [1764156909.3047] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51757 uid=0 result="success"
Nov 26 11:35:09 compute-0 NetworkManager[48976]: <info>  [1764156909.3057] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51757 uid=0 result="success"
Nov 26 11:35:09 compute-0 sudo[52110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzxredgoioulccfxdfriwyhvckfugojk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156908.986632-295-33413934596704/AnsiballZ_async_status.py'
Nov 26 11:35:09 compute-0 sudo[52110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:09 compute-0 NetworkManager[48976]: <info>  [1764156909.4717] audit: op="networking-control" arg="global-dns-configuration" pid=51757 uid=0 result="success"
Nov 26 11:35:09 compute-0 NetworkManager[48976]: <info>  [1764156909.4731] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf)
Nov 26 11:35:09 compute-0 NetworkManager[48976]: <info>  [1764156909.4737] audit: op="networking-control" arg="global-dns-configuration" pid=51757 uid=0 result="success"
Nov 26 11:35:09 compute-0 NetworkManager[48976]: <info>  [1764156909.4757] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51757 uid=0 result="success"
Nov 26 11:35:09 compute-0 python3.9[52112]: ansible-ansible.legacy.async_status Invoked with jid=j544925718064.51751 mode=status _async_dir=/root/.ansible_async
Nov 26 11:35:09 compute-0 sudo[52110]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:09 compute-0 NetworkManager[48976]: <info>  [1764156909.5963] checkpoint[0x556dfc5f8af0]: destroy /org/freedesktop/NetworkManager/Checkpoint/3
Nov 26 11:35:09 compute-0 NetworkManager[48976]: <info>  [1764156909.5969] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51757 uid=0 result="success"
Nov 26 11:35:09 compute-0 ansible-async_wrapper.py[51755]: Module complete (51755)
Nov 26 11:35:10 compute-0 ansible-async_wrapper.py[51754]: Done in kid B.
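
The async job is done: in roughly two seconds (11:35:07-11:35:09) NetworkManager created, activated, and committed (checkpoints destroyed rather than rolled back) the whole OVS topology. A small illustrative check of the result from the host, using nmcli over subprocess; the device names are the ones reported in the log, and the field handling assumes nmcli's terse colon-separated output:

    # Illustrative only: lists the active NetworkManager connections behind the
    # devices this run configured (br-ex, eth1, vlan20-23).
    import subprocess

    out = subprocess.run(
        ["nmcli", "-t", "-f", "NAME,TYPE,DEVICE", "connection", "show", "--active"],
        capture_output=True, text=True, check=True,
    ).stdout

    wanted = {"br-ex", "eth1", "vlan20", "vlan21", "vlan22", "vlan23"}
    for line in out.splitlines():
        name, ctype, device = line.split(":", 2)   # terse output is colon-separated
        if device in wanted:
            print(f"{device:8s} {ctype:20s} {name}")
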
Nov 26 11:35:12 compute-0 sudo[52214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngetmchalohbgryicgbfdyyrcrztbhyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156908.986632-295-33413934596704/AnsiballZ_async_status.py'
Nov 26 11:35:12 compute-0 sudo[52214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:12 compute-0 python3.9[52216]: ansible-ansible.legacy.async_status Invoked with jid=j544925718064.51751 mode=status _async_dir=/root/.ansible_async
Nov 26 11:35:12 compute-0 sudo[52214]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:13 compute-0 sudo[52314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijzpbjvgwmnnpjhphenuqanjvdevhldp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156908.986632-295-33413934596704/AnsiballZ_async_status.py'
Nov 26 11:35:13 compute-0 sudo[52314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:13 compute-0 python3.9[52316]: ansible-ansible.legacy.async_status Invoked with jid=j544925718064.51751 mode=cleanup _async_dir=/root/.ansible_async
Nov 26 11:35:13 compute-0 sudo[52314]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:13 compute-0 sudo[52466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zncgxocliamaclcatdsrxnvrmetmezgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156913.3084602-322-1583670264826/AnsiballZ_stat.py'
Nov 26 11:35:13 compute-0 sudo[52466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:13 compute-0 python3.9[52468]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:35:13 compute-0 sudo[52466]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:13 compute-0 sudo[52589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhwrzsslzgvcbioiclbexnebpczqubeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156913.3084602-322-1583670264826/AnsiballZ_copy.py'
Nov 26 11:35:13 compute-0 sudo[52589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:14 compute-0 python3.9[52591]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764156913.3084602-322-1583670264826/.source.returncode _original_basename=.8xj78ese follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:35:14 compute-0 sudo[52589]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:14 compute-0 sudo[52741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iupszrwxvwhiayphtzyciysdqblptxha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156914.1774676-338-114620521286860/AnsiballZ_stat.py'
Nov 26 11:35:14 compute-0 sudo[52741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:14 compute-0 python3.9[52743]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:35:14 compute-0 sudo[52741]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:14 compute-0 sudo[52864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-earvglctfsblxroujsuhtknmpbauqllo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156914.1774676-338-114620521286860/AnsiballZ_copy.py'
Nov 26 11:35:14 compute-0 sudo[52864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:14 compute-0 python3.9[52866]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764156914.1774676-338-114620521286860/.source.cfg _original_basename=.cblqrk8k follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:35:14 compute-0 sudo[52864]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:15 compute-0 sudo[53016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrickxfgayryxyflfbayhvetpgoofoku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156914.9995546-353-166877189579473/AnsiballZ_systemd.py'
Nov 26 11:35:15 compute-0 sudo[53016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:15 compute-0 python3.9[53018]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 11:35:15 compute-0 systemd[1]: Reloading Network Manager...
Nov 26 11:35:15 compute-0 NetworkManager[48976]: <info>  [1764156915.4748] audit: op="reload" arg="0" pid=53022 uid=0 result="success"
Nov 26 11:35:15 compute-0 NetworkManager[48976]: <info>  [1764156915.4753] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 26 11:35:15 compute-0 NetworkManager[48976]: <info>  [1764156915.4754] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 26 11:35:15 compute-0 systemd[1]: Reloaded Network Manager.
Nov 26 11:35:15 compute-0 sudo[53016]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:15 compute-0 sshd-session[44980]: Connection closed by 192.168.122.30 port 54188
Nov 26 11:35:15 compute-0 sshd-session[44977]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:35:15 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Nov 26 11:35:15 compute-0 systemd[1]: session-9.scope: Consumed 35.299s CPU time.
Nov 26 11:35:15 compute-0 systemd-logind[744]: Session 9 logged out. Waiting for processes to exit.
Nov 26 11:35:15 compute-0 systemd-logind[744]: Removed session 9.
Nov 26 11:35:19 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 26 11:35:20 compute-0 sshd-session[53055]: Accepted publickey for zuul from 192.168.122.30 port 53426 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:35:20 compute-0 systemd-logind[744]: New session 10 of user zuul.
Nov 26 11:35:20 compute-0 systemd[1]: Started Session 10 of User zuul.
Nov 26 11:35:20 compute-0 sshd-session[53055]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:35:21 compute-0 python3.9[53208]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:35:22 compute-0 python3.9[53362]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 11:35:23 compute-0 python3.9[53556]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:35:23 compute-0 sshd-session[53058]: Connection closed by 192.168.122.30 port 53426
Nov 26 11:35:23 compute-0 sshd-session[53055]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:35:23 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Nov 26 11:35:23 compute-0 systemd[1]: session-10.scope: Consumed 1.618s CPU time.
Nov 26 11:35:23 compute-0 systemd-logind[744]: Session 10 logged out. Waiting for processes to exit.
Nov 26 11:35:23 compute-0 systemd-logind[744]: Removed session 10.
Nov 26 11:35:25 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 26 11:35:28 compute-0 sshd-session[53584]: Accepted publickey for zuul from 192.168.122.30 port 36698 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:35:28 compute-0 systemd-logind[744]: New session 11 of user zuul.
Nov 26 11:35:28 compute-0 systemd[1]: Started Session 11 of User zuul.
Nov 26 11:35:28 compute-0 sshd-session[53584]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:35:29 compute-0 python3.9[53738]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:35:30 compute-0 python3.9[53892]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:35:30 compute-0 sudo[54046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cisrdlsmajuqbqoqafocgojuxeckxdkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156930.6439075-40-177372598580093/AnsiballZ_setup.py'
Nov 26 11:35:30 compute-0 sudo[54046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:31 compute-0 python3.9[54048]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 11:35:31 compute-0 sudo[54046]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:31 compute-0 sudo[54130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxymhjtbokteydtmbhutqkfmtlexdanj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156930.6439075-40-177372598580093/AnsiballZ_dnf.py'
Nov 26 11:35:31 compute-0 sudo[54130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:31 compute-0 python3.9[54132]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 11:35:32 compute-0 sudo[54130]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:33 compute-0 sudo[54284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cywbozzwvqzenhjwbzetycoirwaxuvho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156932.902312-52-40643177969994/AnsiballZ_setup.py'
Nov 26 11:35:33 compute-0 sudo[54284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:33 compute-0 python3.9[54286]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 11:35:33 compute-0 sudo[54284]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:34 compute-0 sudo[54479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usnascfyxjzysrwcpiirdcgtkyifptfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156933.6989799-63-39622951638123/AnsiballZ_file.py'
Nov 26 11:35:34 compute-0 sudo[54479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:34 compute-0 python3.9[54481]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:35:34 compute-0 sudo[54479]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:34 compute-0 sudo[54631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjhitffosbzsssfbjcvxvclxnhteujrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156934.302638-71-272698529896076/AnsiballZ_command.py'
Nov 26 11:35:34 compute-0 sudo[54631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:34 compute-0 python3.9[54633]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:35:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck3377554936-merged.mount: Deactivated successfully.
Nov 26 11:35:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-opaque\x2dbug\x2dcheck1276063716-merged.mount: Deactivated successfully.
Nov 26 11:35:34 compute-0 podman[54634]: 2025-11-26 11:35:34.799061064 +0000 UTC m=+0.025740820 system refresh
Nov 26 11:35:34 compute-0 sudo[54631]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:35 compute-0 sudo[54792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpftzfagpsjrgsjyoerfzpuqpnhfdiot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156934.9391067-79-215811389531024/AnsiballZ_stat.py'
Nov 26 11:35:35 compute-0 sudo[54792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:35 compute-0 python3.9[54794]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:35:35 compute-0 sudo[54792]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:35 compute-0 sudo[54916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajqogecixopsalattbssultltxxbdhba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156934.9391067-79-215811389531024/AnsiballZ_copy.py'
Nov 26 11:35:35 compute-0 sudo[54916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 11:35:35 compute-0 python3.9[54918]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764156934.9391067-79-215811389531024/.source.json follow=False _original_basename=podman_network_config.j2 checksum=a06af660b8a67960914a1dd359f708b337dd1ae1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:35:35 compute-0 sudo[54916]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:36 compute-0 sudo[55068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rezrsihsrpebbttvjzeymjwtposlywma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156936.068408-94-153610579165916/AnsiballZ_stat.py'
Nov 26 11:35:36 compute-0 sudo[55068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:36 compute-0 python3.9[55070]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:35:36 compute-0 sudo[55068]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:36 compute-0 sudo[55191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doopegmwkeqvmbofrwjyfoholjncjtud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156936.068408-94-153610579165916/AnsiballZ_copy.py'
Nov 26 11:35:36 compute-0 sudo[55191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:36 compute-0 python3.9[55193]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764156936.068408-94-153610579165916/.source.conf follow=False _original_basename=registries.conf.j2 checksum=c2a85b7389d30a5066b1ae0058c9a8ae1bc25688 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:35:36 compute-0 sudo[55191]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:37 compute-0 sudo[55343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiadjwbwtjogcywsapzloerxzeinjeln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156937.033013-110-73302126434412/AnsiballZ_ini_file.py'
Nov 26 11:35:37 compute-0 sudo[55343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:37 compute-0 python3.9[55345]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:35:37 compute-0 sudo[55343]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:37 compute-0 sudo[55495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmujohfoyvefujzikgwmpbnwjircvdcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156937.6095595-110-187232043045891/AnsiballZ_ini_file.py'
Nov 26 11:35:37 compute-0 sudo[55495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:37 compute-0 python3.9[55497]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:35:37 compute-0 sudo[55495]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:38 compute-0 sudo[55647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvpfkqmemyuzhdhhismmxtbkswygxoin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156938.0352163-110-174764812146101/AnsiballZ_ini_file.py'
Nov 26 11:35:38 compute-0 sudo[55647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:38 compute-0 python3.9[55649]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:35:38 compute-0 sudo[55647]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:38 compute-0 sudo[55800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdnvxmwanqulqkuoujdzwwfowroqplbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156938.4729428-110-47530005196310/AnsiballZ_ini_file.py'
Nov 26 11:35:38 compute-0 sudo[55800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:38 compute-0 python3.9[55802]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:35:38 compute-0 sudo[55800]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:39 compute-0 sudo[55952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shmssshixrdfauzkrscbxsqikperotwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156939.0101042-141-46872879917424/AnsiballZ_dnf.py'
Nov 26 11:35:39 compute-0 sudo[55952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:39 compute-0 python3.9[55954]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 11:35:40 compute-0 sudo[55952]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:40 compute-0 sudo[56105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsvnfejnvgwxhcpbyrtqokrrunowwfct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156940.649466-152-163082496953337/AnsiballZ_setup.py'
Nov 26 11:35:40 compute-0 sudo[56105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:41 compute-0 python3.9[56107]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:35:41 compute-0 sudo[56105]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:41 compute-0 sudo[56259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjttdwlihppywsaiokqdarljslyvndtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156941.225793-160-120829817191150/AnsiballZ_stat.py'
Nov 26 11:35:41 compute-0 sudo[56259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:41 compute-0 python3.9[56261]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:35:41 compute-0 sudo[56259]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:41 compute-0 sudo[56411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imfgarvfqvtcesedkhqoknejjrbgrweu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156941.6974437-169-212037230255443/AnsiballZ_stat.py'
Nov 26 11:35:41 compute-0 sudo[56411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:42 compute-0 python3.9[56413]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:35:42 compute-0 sudo[56411]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:42 compute-0 sudo[56563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkikyudrzrzvepzmqqqpvpspegldrbci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156942.2206662-179-259118335634836/AnsiballZ_command.py'
Nov 26 11:35:42 compute-0 sudo[56563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:42 compute-0 python3.9[56565]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:35:42 compute-0 sudo[56563]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:43 compute-0 sudo[56716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zntftimgxrrljmrhuzgcenhfcifibacp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156942.7703652-189-156129091212148/AnsiballZ_service_facts.py'
Nov 26 11:35:43 compute-0 sudo[56716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:43 compute-0 python3.9[56718]: ansible-service_facts Invoked
Nov 26 11:35:43 compute-0 network[56735]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 11:35:43 compute-0 network[56736]: 'network-scripts' will be removed from distribution in near future.
Nov 26 11:35:43 compute-0 network[56737]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 11:35:44 compute-0 sudo[56716]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:45 compute-0 sudo[57020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ediigvdkvuyqbsflszhjyecbhifwhbnw ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764156945.5250688-204-214265570687470/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764156945.5250688-204-214265570687470/args'
Nov 26 11:35:45 compute-0 sudo[57020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:45 compute-0 sudo[57020]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:46 compute-0 sudo[57187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhvrawqyaxlacrxbscgflugqtmfglvpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156945.9951508-215-205793650032819/AnsiballZ_dnf.py'
Nov 26 11:35:46 compute-0 sudo[57187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:46 compute-0 python3.9[57189]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 11:35:47 compute-0 sudo[57187]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:48 compute-0 sudo[57340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zekdinwzxgzyokacsxdbnygdzrudjmfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156947.6419399-228-247383305193033/AnsiballZ_package_facts.py'
Nov 26 11:35:48 compute-0 sudo[57340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:48 compute-0 python3.9[57342]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 26 11:35:48 compute-0 sudo[57340]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:49 compute-0 sudo[57492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhpiqssiaginaolbjfwwknowexyshloo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156948.8265285-238-202279235073247/AnsiballZ_stat.py'
Nov 26 11:35:49 compute-0 sudo[57492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:49 compute-0 python3.9[57494]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:35:49 compute-0 sudo[57492]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:49 compute-0 sudo[57617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbbywunwvkykuvhawgpcdrkjkcelhlvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156948.8265285-238-202279235073247/AnsiballZ_copy.py'
Nov 26 11:35:49 compute-0 sudo[57617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:49 compute-0 python3.9[57619]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764156948.8265285-238-202279235073247/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:35:49 compute-0 sudo[57617]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:50 compute-0 sudo[57771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cotphvrqvrrgmrdubwxemqlivefosahe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156949.8550444-253-99665956978759/AnsiballZ_stat.py'
Nov 26 11:35:50 compute-0 sudo[57771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:50 compute-0 python3.9[57773]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:35:50 compute-0 sudo[57771]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:50 compute-0 sudo[57896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwmfyumpzzdbqpswdsnlyosulbjacecp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156949.8550444-253-99665956978759/AnsiballZ_copy.py'
Nov 26 11:35:50 compute-0 sudo[57896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:50 compute-0 python3.9[57898]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764156949.8550444-253-99665956978759/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:35:50 compute-0 sudo[57896]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:51 compute-0 sudo[58050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxpjmpdmrquupwkhnyksqwootlblqsqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156951.0061476-274-177486428973669/AnsiballZ_lineinfile.py'
Nov 26 11:35:51 compute-0 sudo[58050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:51 compute-0 python3.9[58052]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:35:51 compute-0 sudo[58050]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:52 compute-0 sudo[58204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgkduamhvdykobeaxdvmtxzffbradzcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156951.8915536-289-93609606919961/AnsiballZ_setup.py'
Nov 26 11:35:52 compute-0 sudo[58204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:52 compute-0 python3.9[58206]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 11:35:52 compute-0 sudo[58204]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:52 compute-0 sudo[58288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgqvjxklbkqrmobqiqvuazdplkbnolud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156951.8915536-289-93609606919961/AnsiballZ_systemd.py'
Nov 26 11:35:52 compute-0 sudo[58288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:53 compute-0 python3.9[58290]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:35:53 compute-0 sudo[58288]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:53 compute-0 sudo[58442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmldgewmalshrmhmfbjlrthwfzqcnnfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156953.580767-305-147647302107979/AnsiballZ_setup.py'
Nov 26 11:35:53 compute-0 sudo[58442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:54 compute-0 python3.9[58444]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 11:35:54 compute-0 sudo[58442]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:54 compute-0 sudo[58526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kepqypiefrlzyjyebyncwrfenfykqeyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156953.580767-305-147647302107979/AnsiballZ_systemd.py'
Nov 26 11:35:54 compute-0 sudo[58526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:35:54 compute-0 python3.9[58528]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 11:35:54 compute-0 chronyd[746]: chronyd exiting
Nov 26 11:35:54 compute-0 systemd[1]: Stopping NTP client/server...
Nov 26 11:35:54 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Nov 26 11:35:54 compute-0 systemd[1]: Stopped NTP client/server.
Nov 26 11:35:54 compute-0 systemd[1]: Starting NTP client/server...
Nov 26 11:35:54 compute-0 chronyd[58536]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 26 11:35:54 compute-0 chronyd[58536]: Frequency -11.274 +/- 0.628 ppm read from /var/lib/chrony/drift
Nov 26 11:35:54 compute-0 chronyd[58536]: Loaded seccomp filter (level 2)
Nov 26 11:35:54 compute-0 systemd[1]: Started NTP client/server.
Nov 26 11:35:54 compute-0 sudo[58526]: pam_unix(sudo:session): session closed for user root
Nov 26 11:35:55 compute-0 sshd-session[53587]: Connection closed by 192.168.122.30 port 36698
Nov 26 11:35:55 compute-0 sshd-session[53584]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:35:55 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Nov 26 11:35:55 compute-0 systemd[1]: session-11.scope: Consumed 17.548s CPU time.
Nov 26 11:35:55 compute-0 systemd-logind[744]: Session 11 logged out. Waiting for processes to exit.
Nov 26 11:35:55 compute-0 systemd-logind[744]: Removed session 11.
Nov 26 11:36:01 compute-0 sshd-session[58562]: Accepted publickey for zuul from 192.168.122.30 port 42474 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:36:01 compute-0 systemd-logind[744]: New session 12 of user zuul.
Nov 26 11:36:01 compute-0 systemd[1]: Started Session 12 of User zuul.
Nov 26 11:36:01 compute-0 sshd-session[58562]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:36:01 compute-0 sudo[58715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfnmilcyttjxvrswgnnhkntmpchhxnoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156961.2546494-22-253354038097644/AnsiballZ_file.py'
Nov 26 11:36:01 compute-0 sudo[58715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:01 compute-0 python3.9[58717]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:01 compute-0 sudo[58715]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:02 compute-0 sudo[58867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oavmmgdfywkbtstpbhcbefrszrpgnkdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156961.8972235-34-368939995951/AnsiballZ_stat.py'
Nov 26 11:36:02 compute-0 sudo[58867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:02 compute-0 python3.9[58869]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:36:02 compute-0 sudo[58867]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:02 compute-0 sudo[58990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awezazrpcofkrfcifacfwlcivrabdskz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156961.8972235-34-368939995951/AnsiballZ_copy.py'
Nov 26 11:36:02 compute-0 sudo[58990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:02 compute-0 python3.9[58992]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764156961.8972235-34-368939995951/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:02 compute-0 sudo[58990]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:03 compute-0 sshd-session[58565]: Connection closed by 192.168.122.30 port 42474
Nov 26 11:36:03 compute-0 sshd-session[58562]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:36:03 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Nov 26 11:36:03 compute-0 systemd[1]: session-12.scope: Consumed 1.083s CPU time.
Nov 26 11:36:03 compute-0 systemd-logind[744]: Session 12 logged out. Waiting for processes to exit.
Nov 26 11:36:03 compute-0 systemd-logind[744]: Removed session 12.
Nov 26 11:36:08 compute-0 sshd-session[59017]: Accepted publickey for zuul from 192.168.122.30 port 55864 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:36:08 compute-0 systemd-logind[744]: New session 13 of user zuul.
Nov 26 11:36:08 compute-0 systemd[1]: Started Session 13 of User zuul.
Nov 26 11:36:08 compute-0 sshd-session[59017]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:36:09 compute-0 python3.9[59170]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:36:09 compute-0 sudo[59324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlujhkaivunhvpckpiudfmemgtjdijdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156969.5374107-33-101805005621513/AnsiballZ_file.py'
Nov 26 11:36:09 compute-0 sudo[59324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:10 compute-0 python3.9[59326]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:10 compute-0 sudo[59324]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:10 compute-0 sudo[59499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epvaqpnsgvnijuwywzgavuwcyjhbrons ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156970.1434743-41-238710764076289/AnsiballZ_stat.py'
Nov 26 11:36:10 compute-0 sudo[59499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:10 compute-0 python3.9[59501]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:36:10 compute-0 sudo[59499]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:10 compute-0 sudo[59622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmqvztqklxalozvzixawbpdpibdjnbrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156970.1434743-41-238710764076289/AnsiballZ_copy.py'
Nov 26 11:36:10 compute-0 sudo[59622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:11 compute-0 python3.9[59624]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764156970.1434743-41-238710764076289/.source.json _original_basename=.ef38pfkz follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:11 compute-0 sudo[59622]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:11 compute-0 sudo[59774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryzlzdkrlitrefaoxqywhnmcxxrzhrsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156971.3878071-64-262626022858504/AnsiballZ_stat.py'
Nov 26 11:36:11 compute-0 sudo[59774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:11 compute-0 python3.9[59776]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:36:11 compute-0 sudo[59774]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:11 compute-0 sudo[59897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhmbousbxjmxerddcoayzhduhnoptgll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156971.3878071-64-262626022858504/AnsiballZ_copy.py'
Nov 26 11:36:11 compute-0 sudo[59897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:12 compute-0 python3.9[59899]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764156971.3878071-64-262626022858504/.source _original_basename=.ie6af18f follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:12 compute-0 sudo[59897]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:12 compute-0 sudo[60049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wscxxppzlwbtjnscmetolymiailgvnml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156972.2560906-80-181103507788646/AnsiballZ_file.py'
Nov 26 11:36:12 compute-0 sudo[60049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:12 compute-0 python3.9[60051]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:36:12 compute-0 sudo[60049]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:12 compute-0 sudo[60201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzywnuquiofngvggvhpiijlmmfvdyenz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156972.7186542-88-261070512302505/AnsiballZ_stat.py'
Nov 26 11:36:12 compute-0 sudo[60201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:13 compute-0 python3.9[60203]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:36:13 compute-0 sudo[60201]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:13 compute-0 sudo[60324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgbyiqhwpbyftwpprguigicgpibtrgrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156972.7186542-88-261070512302505/AnsiballZ_copy.py'
Nov 26 11:36:13 compute-0 sudo[60324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:13 compute-0 python3.9[60326]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764156972.7186542-88-261070512302505/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:36:13 compute-0 sudo[60324]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:13 compute-0 sudo[60476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpxyacnsmcbgfeeelerfavpaanrdummb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156973.5242987-88-75863691094805/AnsiballZ_stat.py'
Nov 26 11:36:13 compute-0 sudo[60476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:13 compute-0 python3.9[60478]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:36:13 compute-0 sudo[60476]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:14 compute-0 sudo[60599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqcwhkfnqdepdqyvxelkvgbqoifhzgdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156973.5242987-88-75863691094805/AnsiballZ_copy.py'
Nov 26 11:36:14 compute-0 sudo[60599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:14 compute-0 python3.9[60601]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764156973.5242987-88-75863691094805/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:36:14 compute-0 sudo[60599]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:14 compute-0 sudo[60751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrgrjecsoqwbnapctgccndihssxleeyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156974.327949-117-33626557957559/AnsiballZ_file.py'
Nov 26 11:36:14 compute-0 sudo[60751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:14 compute-0 python3.9[60753]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:14 compute-0 sudo[60751]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:14 compute-0 sudo[60903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abqoopzpgdkepybcfnrwvwyuuyvjvxke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156974.761277-125-167578688745088/AnsiballZ_stat.py'
Nov 26 11:36:14 compute-0 sudo[60903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:15 compute-0 python3.9[60905]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:36:15 compute-0 sudo[60903]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:15 compute-0 sudo[61026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fravvthsudvqsnkkwbpxrjaivdcsnqbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156974.761277-125-167578688745088/AnsiballZ_copy.py'
Nov 26 11:36:15 compute-0 sudo[61026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:15 compute-0 python3.9[61028]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764156974.761277-125-167578688745088/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:15 compute-0 sudo[61026]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:15 compute-0 sudo[61178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rynrxbaxeloildklnyizasahsfrgncbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156975.6104782-140-140733119537460/AnsiballZ_stat.py'
Nov 26 11:36:15 compute-0 sudo[61178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:15 compute-0 python3.9[61180]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:36:15 compute-0 sudo[61178]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:16 compute-0 sudo[61301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwbdvvwhxseziizdempxrhxaycezstwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156975.6104782-140-140733119537460/AnsiballZ_copy.py'
Nov 26 11:36:16 compute-0 sudo[61301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:16 compute-0 python3.9[61303]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764156975.6104782-140-140733119537460/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:16 compute-0 sudo[61301]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:16 compute-0 sudo[61453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwavtfvaixkjciqcluxvrixfbvygaghq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156976.4724824-155-76981577698057/AnsiballZ_systemd.py'
Nov 26 11:36:16 compute-0 sudo[61453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:17 compute-0 python3.9[61455]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:36:17 compute-0 systemd[1]: Reloading.
Nov 26 11:36:17 compute-0 systemd-rc-local-generator[61475]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:36:17 compute-0 systemd-sysv-generator[61482]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:36:17 compute-0 systemd[1]: Reloading.
Nov 26 11:36:17 compute-0 systemd-rc-local-generator[61513]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:36:17 compute-0 systemd-sysv-generator[61516]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:36:17 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Nov 26 11:36:17 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Nov 26 11:36:17 compute-0 sudo[61453]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:17 compute-0 sudo[61681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnrsvnqajrxpdpssevokjtnqgbgsqvos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156977.6967623-163-186321192614874/AnsiballZ_stat.py'
Nov 26 11:36:17 compute-0 sudo[61681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:18 compute-0 python3.9[61683]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:36:18 compute-0 sudo[61681]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:18 compute-0 sudo[61804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omkqtwoymiimthdvvmfvrqnrxayldgpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156977.6967623-163-186321192614874/AnsiballZ_copy.py'
Nov 26 11:36:18 compute-0 sudo[61804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:18 compute-0 python3.9[61806]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764156977.6967623-163-186321192614874/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:18 compute-0 sudo[61804]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:18 compute-0 sudo[61956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugvwumwzmfkvwwcvilvgijykwxzosmze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156978.5263243-178-184056412073362/AnsiballZ_stat.py'
Nov 26 11:36:18 compute-0 sudo[61956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:18 compute-0 python3.9[61958]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:36:18 compute-0 sudo[61956]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:19 compute-0 sudo[62079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqotjqcsrgglvxvnkotrbuxzkyvirvzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156978.5263243-178-184056412073362/AnsiballZ_copy.py'
Nov 26 11:36:19 compute-0 sudo[62079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:19 compute-0 python3.9[62081]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764156978.5263243-178-184056412073362/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:19 compute-0 sudo[62079]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:19 compute-0 sudo[62231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbdzkcjlbzexlnfbztrjubdwcorcynmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156979.376345-193-276389529527052/AnsiballZ_systemd.py'
Nov 26 11:36:19 compute-0 sudo[62231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:19 compute-0 python3.9[62233]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:36:19 compute-0 systemd[1]: Reloading.
Nov 26 11:36:19 compute-0 systemd-sysv-generator[62256]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:36:19 compute-0 systemd-rc-local-generator[62253]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:36:20 compute-0 systemd[1]: Reloading.
Nov 26 11:36:20 compute-0 systemd-rc-local-generator[62290]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:36:20 compute-0 systemd-sysv-generator[62293]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:36:20 compute-0 systemd[1]: Starting Create netns directory...
Nov 26 11:36:20 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 26 11:36:20 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 26 11:36:20 compute-0 systemd[1]: Finished Create netns directory.
Nov 26 11:36:20 compute-0 sudo[62231]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:20 compute-0 python3.9[62458]: ansible-ansible.builtin.service_facts Invoked
Nov 26 11:36:20 compute-0 network[62475]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 11:36:20 compute-0 network[62476]: 'network-scripts' will be removed from distribution in near future.
Nov 26 11:36:20 compute-0 network[62477]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 11:36:22 compute-0 sudo[62737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlengkkktfkeohwztqccdqzarlsoogig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156982.5718458-209-14029394188744/AnsiballZ_systemd.py'
Nov 26 11:36:22 compute-0 sudo[62737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:22 compute-0 python3.9[62739]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:36:23 compute-0 systemd[1]: Reloading.
Nov 26 11:36:23 compute-0 systemd-rc-local-generator[62761]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:36:23 compute-0 systemd-sysv-generator[62765]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:36:23 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 26 11:36:23 compute-0 iptables.init[62779]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 26 11:36:23 compute-0 iptables.init[62779]: iptables: Flushing firewall rules: [  OK  ]
Nov 26 11:36:23 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Nov 26 11:36:23 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 26 11:36:23 compute-0 sudo[62737]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:23 compute-0 sudo[62973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzuejyslyhbjhkzuasgmaoanrdvvlfce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156983.589595-209-213999075088008/AnsiballZ_systemd.py'
Nov 26 11:36:23 compute-0 sudo[62973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:24 compute-0 python3.9[62975]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:36:24 compute-0 sudo[62973]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:24 compute-0 sudo[63127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulbdhbqyqnxnpzcjekmdjopjzyraaqdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156984.2158554-225-44761952293574/AnsiballZ_systemd.py'
Nov 26 11:36:24 compute-0 sudo[63127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:24 compute-0 python3.9[63129]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:36:24 compute-0 systemd[1]: Reloading.
Nov 26 11:36:24 compute-0 systemd-sysv-generator[63158]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:36:24 compute-0 systemd-rc-local-generator[63155]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:36:24 compute-0 systemd[1]: Starting Netfilter Tables...
Nov 26 11:36:24 compute-0 systemd[1]: Finished Netfilter Tables.
Nov 26 11:36:24 compute-0 sudo[63127]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:25 compute-0 sudo[63319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyfuqsfyvzwcuzdsexpkhntinfouahyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156985.01977-233-140801661176363/AnsiballZ_command.py'
Nov 26 11:36:25 compute-0 sudo[63319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:25 compute-0 python3.9[63321]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:36:25 compute-0 sudo[63319]: pam_unix(sudo:session): session closed for user root
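[annotation] Between 11:36:22 and 11:36:25 the run cuts the host over from the legacy iptables services to nftables: iptables and ip6tables are stopped and disabled, nftables is enabled and started, and any leftover ruleset is flushed. A hedged reconstruction using only the module names and parameters visible in the log:

- name: Stop and disable iptables (reconstructed)
  ansible.builtin.systemd:
    name: iptables.service
    state: stopped
    enabled: false

- name: Stop and disable ip6tables (reconstructed)
  ansible.builtin.systemd:
    name: ip6tables.service
    state: stopped
    enabled: false

- name: Enable and start nftables (reconstructed)
  ansible.builtin.systemd:
    name: nftables
    state: started
    enabled: true

- name: Flush any leftover ruleset (reconstructed)
  ansible.builtin.command: nft flush ruleset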
Nov 26 11:36:26 compute-0 sudo[63472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qolztouxgvqgqftktltphpbsvbjhipdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156985.8107383-247-9951967532903/AnsiballZ_stat.py'
Nov 26 11:36:26 compute-0 sudo[63472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:26 compute-0 python3.9[63474]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:36:26 compute-0 sudo[63472]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:26 compute-0 sudo[63597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgfojdjoxpcejpvaevwvotvpwfyosrwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156985.8107383-247-9951967532903/AnsiballZ_copy.py'
Nov 26 11:36:26 compute-0 sudo[63597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:26 compute-0 python3.9[63599]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764156985.8107383-247-9951967532903/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:26 compute-0 sudo[63597]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:26 compute-0 sudo[63750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvmhthuxrqbugrhxltrlsspfewxpthcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156986.6967602-262-123259734162254/AnsiballZ_systemd.py'
Nov 26 11:36:26 compute-0 sudo[63750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:27 compute-0 python3.9[63752]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 11:36:27 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Nov 26 11:36:27 compute-0 sshd[961]: Received SIGHUP; restarting.
Nov 26 11:36:27 compute-0 sshd[961]: Server listening on 0.0.0.0 port 22.
Nov 26 11:36:27 compute-0 sshd[961]: Server listening on :: port 22.
Nov 26 11:36:27 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Nov 26 11:36:27 compute-0 sudo[63750]: pam_unix(sudo:session): session closed for user root
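[annotation] The sshd step above writes /etc/ssh/sshd_config only after the candidate file passes "sshd -T -f %s", then reloads the daemon (SIGHUP, as the sshd[961] lines show). The copied file's basename is sshd_config_block.j2, so the original task was most likely a template; this sketch assumes that, with the validate string and mode taken from the log:

- name: Render sshd_config with pre-write validation (reconstructed)
  ansible.builtin.template:
    src: sshd_config_block.j2   # assumed template name
    dest: /etc/ssh/sshd_config
    mode: "0600"
    validate: /usr/sbin/sshd -T -f %s

- name: Reload sshd (reconstructed)
  ansible.builtin.systemd:
    name: sshd
    state: reloaded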
Nov 26 11:36:27 compute-0 sudo[63906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlkoqnlitmtsgdxjciaarjzvjromsqiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156987.2973354-270-119762660940650/AnsiballZ_file.py'
Nov 26 11:36:27 compute-0 sudo[63906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:27 compute-0 python3.9[63908]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:27 compute-0 sudo[63906]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:27 compute-0 sudo[64058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrrovpeqsdmaleimitskxemksivmimue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156987.7497277-278-224467917312806/AnsiballZ_stat.py'
Nov 26 11:36:27 compute-0 sudo[64058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:28 compute-0 python3.9[64060]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:36:28 compute-0 sudo[64058]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:28 compute-0 sudo[64181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqutlxkmzpaermzkkvjzdciinukujtdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156987.7497277-278-224467917312806/AnsiballZ_copy.py'
Nov 26 11:36:28 compute-0 sudo[64181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:28 compute-0 python3.9[64183]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764156987.7497277-278-224467917312806/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:28 compute-0 sudo[64181]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:28 compute-0 sudo[64333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qealhosaqzmgclkyekmgohgrbmxhkgjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156988.6852381-296-83089894151921/AnsiballZ_timezone.py'
Nov 26 11:36:28 compute-0 sudo[64333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:29 compute-0 python3.9[64335]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 26 11:36:29 compute-0 systemd[1]: Starting Time & Date Service...
Nov 26 11:36:29 compute-0 systemd[1]: Started Time & Date Service.
Nov 26 11:36:29 compute-0 sudo[64333]: pam_unix(sudo:session): session closed for user root
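[annotation] The timezone step maps to a single community.general.timezone call; systemd-timedated is started on demand to apply it. Sketch from the logged parameters:

- name: Set the host timezone to UTC (reconstructed)
  community.general.timezone:
    name: UTC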
Nov 26 11:36:29 compute-0 sudo[64489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edgbmqqzeepgrmmyzxnnfkflzguwmdcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156989.3940804-305-176093109358544/AnsiballZ_file.py'
Nov 26 11:36:29 compute-0 sudo[64489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:29 compute-0 python3.9[64491]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:29 compute-0 sudo[64489]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:30 compute-0 sudo[64641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soyqbsvdyojpgsroeldjcnxcohpimiqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156989.8513405-313-134578454666642/AnsiballZ_stat.py'
Nov 26 11:36:30 compute-0 sudo[64641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:30 compute-0 python3.9[64643]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:36:30 compute-0 sudo[64641]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:30 compute-0 sudo[64764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdhvrsqqcojxxbqjdavcywqwihgzlaos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156989.8513405-313-134578454666642/AnsiballZ_copy.py'
Nov 26 11:36:30 compute-0 sudo[64764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:30 compute-0 python3.9[64766]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764156989.8513405-313-134578454666642/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:30 compute-0 sudo[64764]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:30 compute-0 sudo[64916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aywwprphohcpbetrqkkhgdgqlmrznctv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156990.6912155-328-113422981227207/AnsiballZ_stat.py'
Nov 26 11:36:30 compute-0 sudo[64916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:31 compute-0 python3.9[64918]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:36:31 compute-0 sudo[64916]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:31 compute-0 sudo[65039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkqnbmldspklvbvahfrceyicbdthnicw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156990.6912155-328-113422981227207/AnsiballZ_copy.py'
Nov 26 11:36:31 compute-0 sudo[65039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:31 compute-0 python3.9[65041]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764156990.6912155-328-113422981227207/.source.yaml _original_basename=.0fnb_u9t follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:31 compute-0 sudo[65039]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:31 compute-0 sudo[65191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okljnlzsvqhemrhsnygqsalwatbvbtwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156991.5248635-343-69308805608069/AnsiballZ_stat.py'
Nov 26 11:36:31 compute-0 sudo[65191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:31 compute-0 python3.9[65193]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:36:31 compute-0 sudo[65191]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:32 compute-0 sudo[65314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgirggorojvlpxrkcxkelhxrsditkjym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156991.5248635-343-69308805608069/AnsiballZ_copy.py'
Nov 26 11:36:32 compute-0 sudo[65314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:32 compute-0 python3.9[65316]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764156991.5248635-343-69308805608069/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:32 compute-0 sudo[65314]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:32 compute-0 sudo[65466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tryzbcnuzkxdczyivvtwifdexfnovhnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156992.4049067-358-123851819268156/AnsiballZ_command.py'
Nov 26 11:36:32 compute-0 sudo[65466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:32 compute-0 python3.9[65468]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:36:32 compute-0 sudo[65466]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:33 compute-0 sudo[65619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgbvzlyusyagkiwpnoqbdblgzjhxpigl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156992.8708665-366-251309887447393/AnsiballZ_command.py'
Nov 26 11:36:33 compute-0 sudo[65619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:33 compute-0 python3.9[65621]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:36:33 compute-0 sudo[65619]: pam_unix(sudo:session): session closed for user root
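[annotation] After copying /etc/nftables/iptables.nft the run loads it and then dumps the live ruleset as JSON, presumably to feed the later rule generation. Reconstructed command tasks; the register name is an assumption:

- name: Load the iptables-compat ruleset into nftables (reconstructed)
  ansible.builtin.command: nft -f /etc/nftables/iptables.nft

- name: Capture the current ruleset as JSON (reconstructed)
  ansible.builtin.command: nft -j list ruleset
  register: current_ruleset   # assumed variable name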
Nov 26 11:36:33 compute-0 sudo[65772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmwtivnjljixtaqvuuubpvntyguhlhgh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764156993.3366024-374-198257805811928/AnsiballZ_edpm_nftables_from_files.py'
Nov 26 11:36:33 compute-0 sudo[65772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:33 compute-0 python3[65774]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 26 11:36:33 compute-0 sudo[65772]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:34 compute-0 sudo[65924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acybimzuzhzbznwflfxcrilihgagqabf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156994.020214-382-45451079672998/AnsiballZ_stat.py'
Nov 26 11:36:34 compute-0 sudo[65924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:34 compute-0 python3.9[65926]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:36:34 compute-0 sudo[65924]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:34 compute-0 sudo[66047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iftrgndhkdazhsdnklgudinfbsvclbcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156994.020214-382-45451079672998/AnsiballZ_copy.py'
Nov 26 11:36:34 compute-0 sudo[66047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:34 compute-0 python3.9[66049]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764156994.020214-382-45451079672998/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:34 compute-0 sudo[66047]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:35 compute-0 sudo[66199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkjniiyuabvctbcjwbpxfwprygvsuwge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156994.8766055-397-114299440366575/AnsiballZ_stat.py'
Nov 26 11:36:35 compute-0 sudo[66199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:35 compute-0 python3.9[66201]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:36:35 compute-0 sudo[66199]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:35 compute-0 sudo[66322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgwyuoskzvxglolvwmdzfrczuscvxojk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156994.8766055-397-114299440366575/AnsiballZ_copy.py'
Nov 26 11:36:35 compute-0 sudo[66322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:35 compute-0 python3.9[66324]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764156994.8766055-397-114299440366575/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:35 compute-0 sudo[66322]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:35 compute-0 sudo[66474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpmmoogjoprkrixdqtfpwuucwsvklfpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156995.7556372-412-146644198182184/AnsiballZ_stat.py'
Nov 26 11:36:35 compute-0 sudo[66474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:36 compute-0 python3.9[66476]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:36:36 compute-0 sudo[66474]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:36 compute-0 sudo[66597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfnsamewmryltpvtwzvszxtryozlewnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156995.7556372-412-146644198182184/AnsiballZ_copy.py'
Nov 26 11:36:36 compute-0 sudo[66597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:36 compute-0 python3.9[66599]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764156995.7556372-412-146644198182184/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:36 compute-0 sudo[66597]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:36 compute-0 sudo[66749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiqivqzznkqbwjzrhrjkwsgasemzakpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156996.6271636-427-3205038703269/AnsiballZ_stat.py'
Nov 26 11:36:36 compute-0 sudo[66749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:36 compute-0 python3.9[66751]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:36:36 compute-0 sudo[66749]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:37 compute-0 sudo[66872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-barvbzdgbuyosvdxjqmmbdeewuocznat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156996.6271636-427-3205038703269/AnsiballZ_copy.py'
Nov 26 11:36:37 compute-0 sudo[66872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:37 compute-0 python3.9[66874]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764156996.6271636-427-3205038703269/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:37 compute-0 sudo[66872]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:37 compute-0 sudo[67024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epudlhhfpwbsuvjsdwnfqprjtqcirohi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156997.4769824-442-71705117695628/AnsiballZ_stat.py'
Nov 26 11:36:37 compute-0 sudo[67024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:37 compute-0 python3.9[67026]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:36:37 compute-0 sudo[67024]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:38 compute-0 sudo[67147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoubtapxulslzwjvxmzbvxrmcdvgdhmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156997.4769824-442-71705117695628/AnsiballZ_copy.py'
Nov 26 11:36:38 compute-0 sudo[67147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:38 compute-0 python3.9[67149]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764156997.4769824-442-71705117695628/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:38 compute-0 sudo[67147]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:38 compute-0 sudo[67299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljzfjdpsdipjuxschmauwlifbusviltq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156998.4056327-457-278582813795560/AnsiballZ_file.py'
Nov 26 11:36:38 compute-0 sudo[67299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:38 compute-0 python3.9[67301]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:38 compute-0 sudo[67299]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:39 compute-0 sudo[67451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lerrjvejlyxvxkmzdkfgafaztkohcblg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156998.8647923-465-232826620245477/AnsiballZ_command.py'
Nov 26 11:36:39 compute-0 sudo[67451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:39 compute-0 python3.9[67453]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:36:39 compute-0 sudo[67451]: pam_unix(sudo:session): session closed for user root
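[annotation] With the EDPM chain, flush, rule and jump files in place, the run concatenates them and runs nft in check-only mode (-c parses without committing) to make sure the combined ruleset is valid before anything is wired into the boot configuration. Sketch of the logged shell task:

- name: Validate the concatenated EDPM rule files without applying them (reconstructed)
  ansible.builtin.shell: |
    set -o pipefail
    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -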
Nov 26 11:36:39 compute-0 sudo[67610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikymebmdccevgwzbbgkprbsckvfgauzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764156999.3758104-473-28910714335190/AnsiballZ_blockinfile.py'
Nov 26 11:36:39 compute-0 sudo[67610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:39 compute-0 python3.9[67612]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:39 compute-0 sudo[67610]: pam_unix(sudo:session): session closed for user root
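[annotation] The blockinfile task above makes the EDPM files load at boot by adding include statements to /etc/sysconfig/nftables.conf, again validated with nft -c before the write. The block content is taken verbatim from the log; only the task name is assumed:

- name: Reference the EDPM rule files from the system nftables config (reconstructed)
  ansible.builtin.blockinfile:
    path: /etc/sysconfig/nftables.conf
    validate: nft -c -f %s
    block: |
      include "/etc/nftables/iptables.nft"
      include "/etc/nftables/edpm-chains.nft"
      include "/etc/nftables/edpm-rules.nft"
      include "/etc/nftables/edpm-jumps.nft"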
Nov 26 11:36:40 compute-0 sudo[67763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-terebvvbzjbxyuxyskpkjbayiiaeszlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157000.0793858-482-50099494844935/AnsiballZ_file.py'
Nov 26 11:36:40 compute-0 sudo[67763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:40 compute-0 python3.9[67765]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:40 compute-0 sudo[67763]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:40 compute-0 sudo[67915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdutavroxkyillonhaintqfoarozjyyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157000.5134153-482-273554213556473/AnsiballZ_file.py'
Nov 26 11:36:40 compute-0 sudo[67915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:40 compute-0 python3.9[67917]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:40 compute-0 sudo[67915]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:41 compute-0 sudo[68067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uawbnaiphbpnqpzhxlwyqnkqhlzibrjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157000.9780893-497-162146049096250/AnsiballZ_mount.py'
Nov 26 11:36:41 compute-0 sudo[68067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:41 compute-0 python3.9[68069]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 26 11:36:41 compute-0 sudo[68067]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:41 compute-0 sudo[68220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnoglhyinzpzqxpkakyumjhojdsutxjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157001.6071386-497-171804766187732/AnsiballZ_mount.py'
Nov 26 11:36:41 compute-0 sudo[68220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:41 compute-0 python3.9[68222]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 26 11:36:41 compute-0 sudo[68220]: pam_unix(sudo:session): session closed for user root
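[annotation] The last four tasks of session 13 create /dev/hugepages1G and /dev/hugepages2M and mount hugetlbfs on each with an explicit page size, persisting the entries (state=mounted, boot=True). The loop structure below is an assumption; the paths, ownership, modes and mount options come from the log:

- name: Create hugepage mount points (reconstructed)
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
    owner: zuul
    group: hugetlbfs
    mode: "0775"
  loop:
    - /dev/hugepages1G
    - /dev/hugepages2M

- name: Mount hugetlbfs with explicit page sizes (reconstructed)
  ansible.posix.mount:
    path: "{{ item.path }}"
    src: none
    fstype: hugetlbfs
    opts: "pagesize={{ item.size }}"
    state: mounted
  loop:
    - { path: /dev/hugepages1G, size: 1G }
    - { path: /dev/hugepages2M, size: 2M }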
Nov 26 11:36:42 compute-0 sshd-session[59020]: Connection closed by 192.168.122.30 port 55864
Nov 26 11:36:42 compute-0 sshd-session[59017]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:36:42 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Nov 26 11:36:42 compute-0 systemd[1]: session-13.scope: Consumed 23.576s CPU time.
Nov 26 11:36:42 compute-0 systemd-logind[744]: Session 13 logged out. Waiting for processes to exit.
Nov 26 11:36:42 compute-0 systemd-logind[744]: Removed session 13.
Nov 26 11:36:48 compute-0 sshd-session[68248]: Accepted publickey for zuul from 192.168.122.30 port 36504 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:36:48 compute-0 systemd-logind[744]: New session 14 of user zuul.
Nov 26 11:36:48 compute-0 systemd[1]: Started Session 14 of User zuul.
Nov 26 11:36:48 compute-0 sshd-session[68248]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:36:48 compute-0 sudo[68401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iodvhvudvhlsaeiubecggyvutansdqfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157008.3143184-16-117415925753013/AnsiballZ_tempfile.py'
Nov 26 11:36:48 compute-0 sudo[68401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:48 compute-0 python3.9[68403]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 26 11:36:48 compute-0 sudo[68401]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:49 compute-0 sudo[68553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlkpylzcnojfkbqbbbxlufpjyvzksnuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157008.9406927-28-44157169175367/AnsiballZ_stat.py'
Nov 26 11:36:49 compute-0 sudo[68553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:49 compute-0 python3.9[68555]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:36:49 compute-0 sudo[68553]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:49 compute-0 sudo[68705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwncfxkghovythmskphmxrrggnrrpooe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157009.525895-38-242947130242157/AnsiballZ_setup.py'
Nov 26 11:36:49 compute-0 sudo[68705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:50 compute-0 python3.9[68707]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:36:50 compute-0 sudo[68705]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:50 compute-0 sudo[68857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqpjfeeyemeyocgkeyjucznhiyctvgrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157010.3416378-47-233185283344289/AnsiballZ_blockinfile.py'
Nov 26 11:36:50 compute-0 sudo[68857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:50 compute-0 python3.9[68859]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCdu1hnIRd4z8G3I/mWuSheNmPiy5JmJewQcNJK8Xnh8+3RkH5Ir5TjoWjcBbus7LjOWI2vB4aZJsvabWyo7XjKyNNxzQ7T+UUtFQHHtIRypSfnRd9wdyQCvNrlkMJ7DLKuVhyc+WAxU7ggNsuqulmsir7MeF5F5u7PpYIFEz55Zw1rMt0Z3DfE7mQzK0SkfNPlKPjVcnsomTnv/2gusmTD/r89MrE1qZVfvp6hlUFt+tTSGrBDlY7nlFn/QezWHpVltfe60IjjlT4ElFFphHl9gsTZX+05KYpO/Uebsxd+fdVUMeE7mHasJ85ZtnVr1e4XfjGNZXAbwMzGT4AsuKukBD2hHY9N2iY2muRygKVb2Dy9T/6KNr7UESlajeu4d+dzV38+cpl+yX0UJifpxrziOs9FoRtXXtvHMgBqhEeMPwM3JVmkHRYuVTgZmkT5hp+701rg/kUmrtMORp4Pz+cPNEf9bBh3MolxoX2ywMemm+X4pQ2q0SkObR2wVPDwIuM=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKJRMprkU6bQh71XlfALiaL1rgqAMYtwVhOv3RB2wXcv
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFDitpoLOaesAk4S4uxyeXlnPZ6G1ds/IGaDtcgfENrpDvSwe8nWJ+j940dFwDP4H7TYghuxWGo6MCtAEhXya7c=
                                             create=True mode=0644 path=/tmp/ansible.xgs_avi8 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:50 compute-0 sudo[68857]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:51 compute-0 sudo[69009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcynmfciivnhiodzkmjfmjoxvehyveja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157010.9310691-55-100509547533031/AnsiballZ_command.py'
Nov 26 11:36:51 compute-0 sudo[69009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:51 compute-0 python3.9[69011]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.xgs_avi8' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:36:51 compute-0 sudo[69009]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:51 compute-0 sudo[69163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljsvqhblipgzugzciabqhqxylzoprxez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157011.523967-63-253200522051799/AnsiballZ_file.py'
Nov 26 11:36:51 compute-0 sudo[69163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:52 compute-0 python3.9[69165]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.xgs_avi8 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:36:52 compute-0 sudo[69163]: pam_unix(sudo:session): session closed for user root
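[annotation] Session 14 builds the system-wide known_hosts: the gathered host keys are staged in an Ansible tempfile with blockinfile, the tempfile is copied over /etc/ssh/ssh_known_hosts with a shell redirect, and the tempfile is removed. Sketch of that flow; the registered variable name is assumed and the key material shown in the log is elided here:

- name: Stage the gathered host keys in a temporary file (reconstructed)
  ansible.builtin.blockinfile:
    path: "{{ known_hosts_tmp.path }}"   # assumed register from the earlier tempfile task
    create: true
    mode: "0644"
    block: |
      compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa <key elided>
      compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 <key elided>
      compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 <key elided>

- name: Install the staged file as the system-wide known_hosts (reconstructed)
  ansible.builtin.shell: cat '{{ known_hosts_tmp.path }}' > /etc/ssh/ssh_known_hosts

- name: Remove the temporary file (reconstructed)
  ansible.builtin.file:
    path: "{{ known_hosts_tmp.path }}"
    state: absent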
Nov 26 11:36:52 compute-0 sshd-session[68251]: Connection closed by 192.168.122.30 port 36504
Nov 26 11:36:52 compute-0 sshd-session[68248]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:36:52 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Nov 26 11:36:52 compute-0 systemd[1]: session-14.scope: Consumed 2.330s CPU time.
Nov 26 11:36:52 compute-0 systemd-logind[744]: Session 14 logged out. Waiting for processes to exit.
Nov 26 11:36:52 compute-0 systemd-logind[744]: Removed session 14.
Nov 26 11:36:57 compute-0 sshd-session[69190]: Accepted publickey for zuul from 192.168.122.30 port 43056 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:36:57 compute-0 systemd-logind[744]: New session 15 of user zuul.
Nov 26 11:36:57 compute-0 systemd[1]: Started Session 15 of User zuul.
Nov 26 11:36:57 compute-0 sshd-session[69190]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:36:58 compute-0 python3.9[69343]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:36:58 compute-0 sudo[69497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vslhrpguembrzpqbydpjipilhaqsmobx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157018.2995627-32-223006367306072/AnsiballZ_systemd.py'
Nov 26 11:36:58 compute-0 sudo[69497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:58 compute-0 python3.9[69499]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 26 11:36:58 compute-0 sudo[69497]: pam_unix(sudo:session): session closed for user root
Nov 26 11:36:59 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 26 11:36:59 compute-0 sudo[69653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlpvoqehwbznnsnbaelevdcoyvwdmmaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157019.1120644-40-155950116332197/AnsiballZ_systemd.py'
Nov 26 11:36:59 compute-0 sudo[69653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:36:59 compute-0 python3.9[69655]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 11:36:59 compute-0 sudo[69653]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:00 compute-0 sudo[69806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfuzlnfpeeongsfqtgqifnzupwtkyaka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157019.7219772-49-200610779794058/AnsiballZ_command.py'
Nov 26 11:37:00 compute-0 sudo[69806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:00 compute-0 python3.9[69808]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:37:00 compute-0 sudo[69806]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:00 compute-0 sudo[69959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogvobkweihnfdwdcnepanwsmxzypyduh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157020.3130789-57-135467609343178/AnsiballZ_stat.py'
Nov 26 11:37:00 compute-0 sudo[69959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:00 compute-0 python3.9[69961]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:37:00 compute-0 sudo[69959]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:01 compute-0 sudo[70113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkefrscbvhkifjuefahndltalpcfvegs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157020.9029102-65-187476804713567/AnsiballZ_command.py'
Nov 26 11:37:01 compute-0 sudo[70113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:01 compute-0 python3.9[70115]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:37:01 compute-0 sudo[70113]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:01 compute-0 sudo[70268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkllwuxwvbluegmbzgkzxumjzxvgoovf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157021.3605392-73-28159360390992/AnsiballZ_file.py'
Nov 26 11:37:01 compute-0 sudo[70268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:01 compute-0 python3.9[70270]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:37:01 compute-0 sudo[70268]: pam_unix(sudo:session): session closed for user root
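[annotation] Session 15 enables sshd, reloads the EDPM chain definitions, and then applies the flush/rules/jump-update files and deletes the edpm-rules.nft.changed marker. The ordering strongly suggests the apply step is gated on that marker existing; the stat/when wiring below is an inference from the observed sequence, the commands themselves are logged:

- name: Load the EDPM chain definitions (reconstructed)
  ansible.builtin.command: nft -f /etc/nftables/edpm-chains.nft

- name: Check whether the rule set changed during configuration (reconstructed)
  ansible.builtin.stat:
    path: /etc/nftables/edpm-rules.nft.changed
  register: edpm_rules_changed   # assumed variable name

- name: Apply flushes, rules and jump updates when the marker exists (reconstructed)
  ansible.builtin.shell: |
    set -o pipefail
    cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -
  when: edpm_rules_changed.stat.exists   # assumed condition

- name: Clear the change marker (reconstructed)
  ansible.builtin.file:
    path: /etc/nftables/edpm-rules.nft.changed
    state: absent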
Nov 26 11:37:02 compute-0 sshd-session[69193]: Connection closed by 192.168.122.30 port 43056
Nov 26 11:37:02 compute-0 sshd-session[69190]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:37:02 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Nov 26 11:37:02 compute-0 systemd[1]: session-15.scope: Consumed 3.091s CPU time.
Nov 26 11:37:02 compute-0 systemd-logind[744]: Session 15 logged out. Waiting for processes to exit.
Nov 26 11:37:02 compute-0 systemd-logind[744]: Removed session 15.
Nov 26 11:37:07 compute-0 sshd-session[70295]: Accepted publickey for zuul from 192.168.122.30 port 44604 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:37:07 compute-0 systemd-logind[744]: New session 16 of user zuul.
Nov 26 11:37:07 compute-0 systemd[1]: Started Session 16 of User zuul.
Nov 26 11:37:07 compute-0 sshd-session[70295]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:37:07 compute-0 python3.9[70448]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:37:08 compute-0 sudo[70602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmtvvnimfrhtqnekbhhclwhnergpfpxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157028.290033-34-278842490492455/AnsiballZ_setup.py'
Nov 26 11:37:08 compute-0 sudo[70602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:08 compute-0 python3.9[70604]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 11:37:08 compute-0 sudo[70602]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:09 compute-0 sudo[70686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npkotakkvrxxkdqduqkxqaudcnsbobme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157028.290033-34-278842490492455/AnsiballZ_dnf.py'
Nov 26 11:37:09 compute-0 sudo[70686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:09 compute-0 python3.9[70688]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 26 11:37:10 compute-0 sudo[70686]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:10 compute-0 python3.9[70839]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:37:11 compute-0 python3.9[70990]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 11:37:12 compute-0 python3.9[71140]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:37:13 compute-0 python3.9[71290]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
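[annotation] Session 16 installs yum-utils and then checks whether a reboot is pending, first with needs-restarting -r (which exits non-zero when a reboot is required) and then by looking for marker files under /var/lib/openstack/reboot_required/. Sketch from the logged invocations; the register name and failed_when handling are assumptions:

- name: Ensure yum-utils is present (reconstructed)
  ansible.builtin.dnf:
    name: yum-utils

- name: Ask the package manager whether a reboot is required (reconstructed)
  ansible.builtin.command: needs-restarting -r
  register: needs_restarting   # assumed variable name
  failed_when: false           # assumed; exit code 1 simply means "reboot needed"

- name: Look for explicit reboot-required markers (reconstructed)
  ansible.builtin.find:
    paths:
      - /var/lib/openstack/reboot_required/
    file_type: file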
Nov 26 11:37:13 compute-0 sshd-session[70298]: Connection closed by 192.168.122.30 port 44604
Nov 26 11:37:13 compute-0 sshd-session[70295]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:37:13 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Nov 26 11:37:13 compute-0 systemd[1]: session-16.scope: Consumed 4.378s CPU time.
Nov 26 11:37:13 compute-0 systemd-logind[744]: Session 16 logged out. Waiting for processes to exit.
Nov 26 11:37:13 compute-0 systemd-logind[744]: Removed session 16.
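The two tasks just before this session closed run needs-restarting -r and scan /var/lib/openstack/reboot_required/ to decide whether the node should be rebooted after package updates. A minimal manual equivalent, assuming yum-utils is installed and that files in the marker directory act as explicit reboot requests (an inference from the find task above):

# needs-restarting -r exits 1 when an updated kernel or core libraries call for a reboot.
needs-restarting -r || echo "reboot required"
# List any explicit reboot markers left by earlier deployment steps.
find /var/lib/openstack/reboot_required/ -type f 2>/dev/null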
Nov 26 11:37:19 compute-0 sshd-session[71315]: Accepted publickey for zuul from 192.168.26.201 port 45718 ssh2: RSA SHA256:zabNQ9AdBNRW68Pm3aADxeQV2ZE/dUlv4LQX84ptJZE
Nov 26 11:37:19 compute-0 systemd-logind[744]: New session 17 of user zuul.
Nov 26 11:37:19 compute-0 systemd[1]: Started Session 17 of User zuul.
Nov 26 11:37:19 compute-0 sshd-session[71315]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:37:19 compute-0 sudo[71391]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mshqitrxfvhicqkewqpnlqypewkclgxn ; /usr/bin/python3'
Nov 26 11:37:19 compute-0 sudo[71391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:20 compute-0 useradd[71395]: new group: name=ceph-admin, GID=42478
Nov 26 11:37:20 compute-0 useradd[71395]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Nov 26 11:37:20 compute-0 sudo[71391]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:20 compute-0 sudo[71477]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okznhzhladncwlcrixchrjyanzngfjcb ; /usr/bin/python3'
Nov 26 11:37:20 compute-0 sudo[71477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:20 compute-0 sudo[71477]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:20 compute-0 sudo[71550]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnkgptpbqsheustfwxlzmbnunumogqsq ; /usr/bin/python3'
Nov 26 11:37:20 compute-0 sudo[71550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:20 compute-0 sudo[71550]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:20 compute-0 sudo[71600]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxjgwnpelyizznrqcewzniagqdmgcfiz ; /usr/bin/python3'
Nov 26 11:37:20 compute-0 sudo[71600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:21 compute-0 sudo[71600]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:21 compute-0 sudo[71626]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpojxaqsuzmcubfnbvznzclheydmvvad ; /usr/bin/python3'
Nov 26 11:37:21 compute-0 sudo[71626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:21 compute-0 sudo[71626]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:21 compute-0 sudo[71652]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmvbehztvizgpvpzwsklmwrixnbvygqg ; /usr/bin/python3'
Nov 26 11:37:21 compute-0 sudo[71652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:21 compute-0 sudo[71652]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:21 compute-0 sudo[71678]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfjjtnhiikqrufsgqkgclpvvnnsdhtrh ; /usr/bin/python3'
Nov 26 11:37:21 compute-0 sudo[71678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:21 compute-0 sudo[71678]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:22 compute-0 sudo[71756]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewdogaykotzyjcbhtwqbnzmplkfvfjnm ; /usr/bin/python3'
Nov 26 11:37:22 compute-0 sudo[71756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:22 compute-0 sudo[71756]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:22 compute-0 sudo[71829]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqtccedgqmrfzpunhucbondukoxyutmx ; /usr/bin/python3'
Nov 26 11:37:22 compute-0 sudo[71829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:22 compute-0 sudo[71829]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:22 compute-0 sudo[71931]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwkealypbrktukyiddlieceuwumxhlya ; /usr/bin/python3'
Nov 26 11:37:22 compute-0 sudo[71931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:22 compute-0 sudo[71931]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:22 compute-0 sudo[72004]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vitpgdfumbpfwllnpaapwwfhlrljwcnu ; /usr/bin/python3'
Nov 26 11:37:22 compute-0 sudo[72004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:23 compute-0 sudo[72004]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:23 compute-0 sudo[72054]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqnsahyojlrhbtxikxokvdnhkjybeaxs ; /usr/bin/python3'
Nov 26 11:37:23 compute-0 sudo[72054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:23 compute-0 python3[72056]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:37:24 compute-0 sudo[72054]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:24 compute-0 sudo[72145]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvymwwtezpctccpaytnubgogwpswnsac ; /usr/bin/python3'
Nov 26 11:37:24 compute-0 sudo[72145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:24 compute-0 python3[72147]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 26 11:37:25 compute-0 sudo[72145]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:25 compute-0 sudo[72172]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czodkqtvtgwymwiklfoahsqxtgmbxvuf ; /usr/bin/python3'
Nov 26 11:37:25 compute-0 sudo[72172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:25 compute-0 python3[72174]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 11:37:25 compute-0 sudo[72172]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:25 compute-0 rsyslogd[960]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 11:37:26 compute-0 sudo[72199]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmxqlgppinopvfhuqagsxjbfpajenptr ; /usr/bin/python3'
Nov 26 11:37:26 compute-0 sudo[72199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:26 compute-0 python3[72201]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:37:26 compute-0 kernel: loop: module loaded
Nov 26 11:37:26 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Nov 26 11:37:26 compute-0 sudo[72199]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:26 compute-0 sudo[72235]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vikkihnijngguqdtswvgryujcltgfqxk ; /usr/bin/python3'
Nov 26 11:37:26 compute-0 sudo[72235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:26 compute-0 python3[72237]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:37:26 compute-0 lvm[72240]: PV /dev/loop3 not used.
Nov 26 11:37:26 compute-0 lvm[72248]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 26 11:37:26 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 26 11:37:26 compute-0 sudo[72235]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:26 compute-0 lvm[72251]:   1 logical volume(s) in volume group "ceph_vg0" now active
Nov 26 11:37:26 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
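Taken together, the two command tasks above give OSD 0 its backing store: a sparse 20 GiB file attached to /dev/loop3 with a single LV carved out of it. The same sequence, condensed from the task parameters above and commented:

# Sparse 20 GiB file: count=0 writes no data, seek=20G only sets the file size.
dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
# Attach the file to a fixed loop device so it is reachable as /dev/loop3.
losetup /dev/loop3 /var/lib/ceph-osd-0.img
lsblk
# One PV, one VG, one LV spanning the whole device: the LV that will back the OSD.
pvcreate /dev/loop3
vgcreate ceph_vg0 /dev/loop3
lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
lvs

The kernel line "detected capacity change from 0 to 41943040" is the same 20 GiB expressed in 512-byte sectors (20 * 1024^3 / 512 = 41943040). The blocks that follow repeat the pattern for /dev/loop4 (ceph_vg1/ceph_lv1) and /dev/loop5 (ceph_vg2/ceph_lv2).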
Nov 26 11:37:26 compute-0 sudo[72327]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtkzwciollzrlfmzredkuhmlosbfijai ; /usr/bin/python3'
Nov 26 11:37:26 compute-0 sudo[72327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:26 compute-0 python3[72329]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:37:26 compute-0 sudo[72327]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:27 compute-0 sudo[72400]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebguidvsutpmwwywiqkgdcoabehaobyp ; /usr/bin/python3'
Nov 26 11:37:27 compute-0 sudo[72400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:27 compute-0 python3[72402]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764157046.6815274-36598-111424964625935/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:37:27 compute-0 sudo[72400]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:27 compute-0 sudo[72450]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xztxivsluwelwblrmzytkyuvvjdjpzvl ; /usr/bin/python3'
Nov 26 11:37:27 compute-0 sudo[72450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:27 compute-0 python3[72452]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:37:27 compute-0 systemd[1]: Reloading.
Nov 26 11:37:27 compute-0 systemd-sysv-generator[72478]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:37:27 compute-0 systemd-rc-local-generator[72475]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:37:27 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 26 11:37:27 compute-0 bash[72491]: /dev/loop3: [64513]:4194935 (/var/lib/ceph-osd-0.img)
Nov 26 11:37:27 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 26 11:37:27 compute-0 lvm[72492]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 26 11:37:27 compute-0 lvm[72492]: VG ceph_vg0 finished
Nov 26 11:37:27 compute-0 sudo[72450]: pam_unix(sudo:session): session closed for user root
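The unit installed above as /etc/systemd/system/ceph-osd-losetup-0.service is rendered from ceph-osd-losetup.service.j2 and its content is not logged (content=NOT_LOGGING_PARAMETER). From the start-up messages (a oneshot job named "Ceph OSD losetup" whose ExecStart prints the losetup mapping for /dev/loop3), one plausible reconstruction is sketched below; the exact ExecStart and unit ordering are assumptions:

# Hypothetical reconstruction; the real template was not logged.
cat > /etc/systemd/system/ceph-osd-losetup-0.service <<'EOF'
[Unit]
Description=Ceph OSD losetup
After=local-fs.target

[Service]
Type=oneshot
RemainAfterExit=yes
# Print the existing mapping if loop3 is already attached, otherwise attach the backing file.
ExecStart=/bin/bash -c '/sbin/losetup /dev/loop3 || /sbin/losetup /dev/loop3 /var/lib/ceph-osd-0.img'

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now ceph-osd-losetup-0.service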
Nov 26 11:37:28 compute-0 sudo[72516]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unvokwcigjgxlbrdtsgmlyjckxenwfww ; /usr/bin/python3'
Nov 26 11:37:28 compute-0 sudo[72516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:28 compute-0 python3[72518]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 26 11:37:29 compute-0 sudo[72516]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:29 compute-0 sudo[72543]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtgtqodtkujboawqbywiamrntrrcaudf ; /usr/bin/python3'
Nov 26 11:37:29 compute-0 sudo[72543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:29 compute-0 python3[72545]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 11:37:29 compute-0 sudo[72543]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:29 compute-0 sudo[72569]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlceyodjinfuuqwodldbjtirattkkdeg ; /usr/bin/python3'
Nov 26 11:37:29 compute-0 sudo[72569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:29 compute-0 python3[72571]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:37:29 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Nov 26 11:37:29 compute-0 sudo[72569]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:29 compute-0 sudo[72601]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuiqaurofbajbafkgkdtbfyjopygjjlk ; /usr/bin/python3'
Nov 26 11:37:29 compute-0 sudo[72601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:29 compute-0 python3[72603]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:37:29 compute-0 lvm[72606]: PV /dev/loop4 not used.
Nov 26 11:37:29 compute-0 lvm[72616]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 26 11:37:29 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Nov 26 11:37:29 compute-0 sudo[72601]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:29 compute-0 lvm[72618]:   1 logical volume(s) in volume group "ceph_vg1" now active
Nov 26 11:37:30 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Nov 26 11:37:30 compute-0 sudo[72694]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzqailskyedckoqcbzkdigoydxpwzlim ; /usr/bin/python3'
Nov 26 11:37:30 compute-0 sudo[72694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:30 compute-0 python3[72696]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:37:30 compute-0 sudo[72694]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:30 compute-0 sudo[72767]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jluzzdgvfcmwwozkhmnwutgncqknfivm ; /usr/bin/python3'
Nov 26 11:37:30 compute-0 sudo[72767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:30 compute-0 python3[72769]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764157050.0435712-36625-159626229902201/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:37:30 compute-0 sudo[72767]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:30 compute-0 sudo[72817]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljvnlbzpxgerkmctanrofnkcsmgffztp ; /usr/bin/python3'
Nov 26 11:37:30 compute-0 sudo[72817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:30 compute-0 python3[72819]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:37:30 compute-0 systemd[1]: Reloading.
Nov 26 11:37:30 compute-0 systemd-rc-local-generator[72844]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:37:30 compute-0 systemd-sysv-generator[72848]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:37:31 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 26 11:37:31 compute-0 bash[72859]: /dev/loop4: [64513]:4194936 (/var/lib/ceph-osd-1.img)
Nov 26 11:37:31 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 26 11:37:31 compute-0 lvm[72860]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 26 11:37:31 compute-0 lvm[72860]: VG ceph_vg1 finished
Nov 26 11:37:31 compute-0 sudo[72817]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:31 compute-0 sudo[72884]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erqxiyhlsqvfapznnuxxcbrmaijvscgu ; /usr/bin/python3'
Nov 26 11:37:31 compute-0 sudo[72884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:31 compute-0 python3[72886]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 26 11:37:32 compute-0 sudo[72884]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:32 compute-0 sudo[72911]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shleuygjcwkhyervccvzewshxlyqichn ; /usr/bin/python3'
Nov 26 11:37:32 compute-0 sudo[72911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:32 compute-0 python3[72913]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 11:37:32 compute-0 sudo[72911]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:32 compute-0 sudo[72937]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkcogiogvhrozxkyprsldafmdxgduycu ; /usr/bin/python3'
Nov 26 11:37:32 compute-0 sudo[72937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:32 compute-0 python3[72939]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:37:32 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Nov 26 11:37:32 compute-0 sudo[72937]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:32 compute-0 sudo[72969]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqbleyrcvvpipdjygbeyvriznnjcxjsq ; /usr/bin/python3'
Nov 26 11:37:32 compute-0 sudo[72969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:32 compute-0 python3[72971]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:37:33 compute-0 lvm[72974]: PV /dev/loop5 not used.
Nov 26 11:37:33 compute-0 lvm[72984]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 26 11:37:33 compute-0 sudo[72969]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:33 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Nov 26 11:37:33 compute-0 lvm[72986]:   1 logical volume(s) in volume group "ceph_vg2" now active
Nov 26 11:37:33 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Nov 26 11:37:33 compute-0 sudo[73062]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugbpgezyhthkjnzicvlsfhkdbxnchwpk ; /usr/bin/python3'
Nov 26 11:37:33 compute-0 sudo[73062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:33 compute-0 python3[73064]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:37:33 compute-0 sudo[73062]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:33 compute-0 sudo[73135]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmyzqilivtlaxonjyrdqvsiqdabdfkay ; /usr/bin/python3'
Nov 26 11:37:33 compute-0 sudo[73135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:33 compute-0 python3[73137]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764157053.1970248-36652-166930657831503/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:37:33 compute-0 sudo[73135]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:33 compute-0 sudo[73185]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgqpytalivokkjbovowrmtlurfdahewl ; /usr/bin/python3'
Nov 26 11:37:33 compute-0 sudo[73185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:33 compute-0 python3[73187]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:37:34 compute-0 systemd[1]: Reloading.
Nov 26 11:37:34 compute-0 systemd-sysv-generator[73213]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:37:34 compute-0 systemd-rc-local-generator[73210]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:37:34 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 26 11:37:34 compute-0 bash[73226]: /dev/loop5: [64513]:4194939 (/var/lib/ceph-osd-2.img)
Nov 26 11:37:34 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 26 11:37:34 compute-0 lvm[73227]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 26 11:37:34 compute-0 lvm[73227]: VG ceph_vg2 finished
Nov 26 11:37:34 compute-0 sudo[73185]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:35 compute-0 python3[73251]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:37:37 compute-0 sudo[73342]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoukuvgastiaaaadznginpunnwdxigcr ; /usr/bin/python3'
Nov 26 11:37:37 compute-0 sudo[73342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:37 compute-0 python3[73344]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 26 11:37:38 compute-0 groupadd[73350]: group added to /etc/group: name=cephadm, GID=992
Nov 26 11:37:38 compute-0 groupadd[73350]: group added to /etc/gshadow: name=cephadm
Nov 26 11:37:38 compute-0 groupadd[73350]: new group: name=cephadm, GID=992
Nov 26 11:37:38 compute-0 useradd[73357]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Nov 26 11:37:38 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 11:37:38 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 26 11:37:38 compute-0 sudo[73342]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:38 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 11:37:38 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 26 11:37:38 compute-0 systemd[1]: run-r9bd77237093a42e293b14a5d513dfeb7.service: Deactivated successfully.
Nov 26 11:37:38 compute-0 sudo[73453]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgosjvjkjfduivttjlkjyvbhydskivwu ; /usr/bin/python3'
Nov 26 11:37:38 compute-0 sudo[73453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:38 compute-0 python3[73455]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 11:37:38 compute-0 sudo[73453]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:39 compute-0 sudo[73481]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpxpvksstnnhvtlyleklymgumfjimnln ; /usr/bin/python3'
Nov 26 11:37:39 compute-0 sudo[73481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:39 compute-0 python3[73483]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:37:39 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 11:37:39 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 11:37:39 compute-0 sudo[73481]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:39 compute-0 sudo[73538]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqniurfjasebgzudpsqgahpglcnhrcnh ; /usr/bin/python3'
Nov 26 11:37:39 compute-0 sudo[73538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:39 compute-0 python3[73540]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:37:39 compute-0 sudo[73538]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:39 compute-0 sudo[73564]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjqeipanzfywbvlpurjxbprhkbgbqwrp ; /usr/bin/python3'
Nov 26 11:37:39 compute-0 sudo[73564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:39 compute-0 python3[73566]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:37:39 compute-0 sudo[73564]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:40 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 11:37:40 compute-0 sudo[73642]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urmwwtlolheauqaabgnvcxhwoszgopwt ; /usr/bin/python3'
Nov 26 11:37:40 compute-0 sudo[73642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:40 compute-0 python3[73644]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:37:40 compute-0 sudo[73642]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:40 compute-0 sudo[73715]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvinqcopyfzlhofzmjikocpxpnbsuelg ; /usr/bin/python3'
Nov 26 11:37:40 compute-0 sudo[73715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:40 compute-0 python3[73717]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764157060.2389703-36800-274101633451114/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:37:40 compute-0 sudo[73715]: pam_unix(sudo:session): session closed for user root
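The spec copied to /home/ceph-admin/specs/ceph_spec.yaml is likewise not logged. For orientation only: a cephadm service specification for this layout would typically name the host and point an OSD service at the three LVs created earlier. Everything in the sketch below beyond the host name, monitor address, and LV paths already seen in this log is guessed:

# Purely illustrative; the actual spec content was not logged.
cat > /home/ceph-admin/specs/ceph_spec.yaml <<'EOF'
service_type: host
hostname: compute-0
addr: 192.168.122.100
---
service_type: osd
service_id: default_drive_group
placement:
  hosts:
    - compute-0
spec:
  data_devices:
    paths:
      - /dev/ceph_vg0/ceph_lv0
      - /dev/ceph_vg1/ceph_lv1
      - /dev/ceph_vg2/ceph_lv2
EOF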
Nov 26 11:37:41 compute-0 sudo[73817]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnukidfgvyufhmmcverlsyjwvehjkmvs ; /usr/bin/python3'
Nov 26 11:37:41 compute-0 sudo[73817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:41 compute-0 python3[73819]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:37:41 compute-0 sudo[73817]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:41 compute-0 sudo[73890]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsufelqihojjqfxfogtrxphrxhdzpfhk ; /usr/bin/python3'
Nov 26 11:37:41 compute-0 sudo[73890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:41 compute-0 python3[73892]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764157061.011191-36818-99002698841287/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:37:41 compute-0 sudo[73890]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:41 compute-0 sudo[73940]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmdtplkrbeslovzactjmowfhqqjyqyzo ; /usr/bin/python3'
Nov 26 11:37:41 compute-0 sudo[73940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:41 compute-0 python3[73942]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 11:37:41 compute-0 sudo[73940]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:41 compute-0 sudo[73968]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geededzodnmxmmdfeogvoacwvyonigbp ; /usr/bin/python3'
Nov 26 11:37:41 compute-0 sudo[73968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:41 compute-0 python3[73970]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 11:37:41 compute-0 sudo[73968]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:42 compute-0 sudo[73996]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skwkmciiixsiykwimzfhdnpioyrlwkku ; /usr/bin/python3'
Nov 26 11:37:42 compute-0 sudo[73996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:42 compute-0 python3[73998]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 11:37:42 compute-0 sudo[73996]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:42 compute-0 sudo[74024]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjwarylrfxwlqxblfcugrsuqflufppxa ; /usr/bin/python3'
Nov 26 11:37:42 compute-0 sudo[74024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:37:42 compute-0 python3[74026]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
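The bootstrap call above is dense and carries a couple of stray line-continuation backslashes from the playbook. The same invocation, broken across lines, with a hedged gloss of the standard cephadm bootstrap options:

# Option gloss (standard cephadm bootstrap flags):
#   --skip-firewalld / --skip-prepare-host       do not configure firewalld or run host preparation
#   --ssh-private-key / --ssh-public-key / --ssh-user   key pair and account the orchestrator uses to reach hosts
#   --allow-fqdn-hostname                        accept a fully qualified hostname for this host
#   --output-keyring / --output-config           where to write the admin keyring and ceph.conf
#   --fsid                                       use this cluster fsid instead of generating one
#   --config                                     initial settings to assimilate into the monitor config store
#   --single-host-defaults                       adjust configuration defaults for a one-node cluster
#   --skip-monitoring-stack / --skip-dashboard   no prometheus/grafana/alertmanager, no mgr dashboard
#   --mon-ip                                     address for the first monitor daemon
/usr/sbin/cephadm bootstrap \
  --skip-firewalld --skip-prepare-host \
  --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
  --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
  --ssh-user ceph-admin --allow-fqdn-hostname \
  --output-keyring /etc/ceph/ceph.client.admin.keyring \
  --output-config /etc/ceph/ceph.conf \
  --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 \
  --config /home/ceph-admin/assimilate_ceph.conf \
  --single-host-defaults --skip-monitoring-stack --skip-dashboard \
  --mon-ip 192.168.122.100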
Nov 26 11:37:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 11:37:42 compute-0 sshd-session[74040]: Accepted publickey for ceph-admin from 192.168.122.100 port 32826 ssh2: RSA SHA256:UwRHloH7+q4x7CI/eXsFrZa7OprktgY5vDgjNOULMBQ
Nov 26 11:37:42 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 26 11:37:42 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 26 11:37:42 compute-0 systemd-logind[744]: New session 18 of user ceph-admin.
Nov 26 11:37:42 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 26 11:37:42 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 26 11:37:42 compute-0 systemd[74044]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 11:37:42 compute-0 systemd[74044]: Queued start job for default target Main User Target.
Nov 26 11:37:42 compute-0 systemd[74044]: Created slice User Application Slice.
Nov 26 11:37:42 compute-0 systemd[74044]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 26 11:37:42 compute-0 systemd[74044]: Started Daily Cleanup of User's Temporary Directories.
Nov 26 11:37:42 compute-0 systemd[74044]: Reached target Paths.
Nov 26 11:37:42 compute-0 systemd[74044]: Reached target Timers.
Nov 26 11:37:42 compute-0 systemd[74044]: Starting D-Bus User Message Bus Socket...
Nov 26 11:37:42 compute-0 systemd[74044]: Starting Create User's Volatile Files and Directories...
Nov 26 11:37:42 compute-0 systemd[74044]: Listening on D-Bus User Message Bus Socket.
Nov 26 11:37:42 compute-0 systemd[74044]: Reached target Sockets.
Nov 26 11:37:42 compute-0 systemd[74044]: Finished Create User's Volatile Files and Directories.
Nov 26 11:37:42 compute-0 systemd[74044]: Reached target Basic System.
Nov 26 11:37:42 compute-0 systemd[74044]: Reached target Main User Target.
Nov 26 11:37:42 compute-0 systemd[74044]: Startup finished in 77ms.
Nov 26 11:37:42 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 26 11:37:42 compute-0 systemd[1]: Started Session 18 of User ceph-admin.
Nov 26 11:37:42 compute-0 sshd-session[74040]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 11:37:42 compute-0 sudo[74060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Nov 26 11:37:42 compute-0 sudo[74060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:37:42 compute-0 sudo[74060]: pam_unix(sudo:session): session closed for user root
Nov 26 11:37:42 compute-0 sshd-session[74059]: Received disconnect from 192.168.122.100 port 32826:11: disconnected by user
Nov 26 11:37:42 compute-0 sshd-session[74059]: Disconnected from user ceph-admin 192.168.122.100 port 32826
Nov 26 11:37:42 compute-0 sshd-session[74040]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 26 11:37:42 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Nov 26 11:37:42 compute-0 systemd-logind[744]: Session 18 logged out. Waiting for processes to exit.
Nov 26 11:37:42 compute-0 systemd-logind[744]: Removed session 18.
Nov 26 11:37:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3163143503-lower\x2dmapped.mount: Deactivated successfully.
Nov 26 11:37:53 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Nov 26 11:37:53 compute-0 systemd[74044]: Activating special unit Exit the Session...
Nov 26 11:37:53 compute-0 systemd[74044]: Stopped target Main User Target.
Nov 26 11:37:53 compute-0 systemd[74044]: Stopped target Basic System.
Nov 26 11:37:53 compute-0 systemd[74044]: Stopped target Paths.
Nov 26 11:37:53 compute-0 systemd[74044]: Stopped target Sockets.
Nov 26 11:37:53 compute-0 systemd[74044]: Stopped target Timers.
Nov 26 11:37:53 compute-0 systemd[74044]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 26 11:37:53 compute-0 systemd[74044]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 26 11:37:53 compute-0 systemd[74044]: Closed D-Bus User Message Bus Socket.
Nov 26 11:37:53 compute-0 systemd[74044]: Stopped Create User's Volatile Files and Directories.
Nov 26 11:37:53 compute-0 systemd[74044]: Removed slice User Application Slice.
Nov 26 11:37:53 compute-0 systemd[74044]: Reached target Shutdown.
Nov 26 11:37:53 compute-0 systemd[74044]: Finished Exit the Session.
Nov 26 11:37:53 compute-0 systemd[74044]: Reached target Exit the Session.
Nov 26 11:37:53 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Nov 26 11:37:53 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Nov 26 11:37:53 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 26 11:37:53 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 26 11:37:53 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 26 11:37:53 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 26 11:37:53 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Nov 26 11:37:57 compute-0 podman[74097]: 2025-11-26 11:37:57.061372831 +0000 UTC m=+14.087305764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:37:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 11:37:57 compute-0 podman[74148]: 2025-11-26 11:37:57.104874311 +0000 UTC m=+0.026557878 container create 956dbd33ab8432e497b063efd30af8595ea5780a5ab3e3f405b9479ca7a0d097 (image=quay.io/ceph/ceph:v18, name=brave_dubinsky, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 11:37:57 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 26 11:37:57 compute-0 systemd[1]: Started libpod-conmon-956dbd33ab8432e497b063efd30af8595ea5780a5ab3e3f405b9479ca7a0d097.scope.
Nov 26 11:37:57 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:37:57 compute-0 podman[74148]: 2025-11-26 11:37:57.165446452 +0000 UTC m=+0.087130040 container init 956dbd33ab8432e497b063efd30af8595ea5780a5ab3e3f405b9479ca7a0d097 (image=quay.io/ceph/ceph:v18, name=brave_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:37:57 compute-0 podman[74148]: 2025-11-26 11:37:57.170641972 +0000 UTC m=+0.092325539 container start 956dbd33ab8432e497b063efd30af8595ea5780a5ab3e3f405b9479ca7a0d097 (image=quay.io/ceph/ceph:v18, name=brave_dubinsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 11:37:57 compute-0 podman[74148]: 2025-11-26 11:37:57.171660784 +0000 UTC m=+0.093344352 container attach 956dbd33ab8432e497b063efd30af8595ea5780a5ab3e3f405b9479ca7a0d097 (image=quay.io/ceph/ceph:v18, name=brave_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 11:37:57 compute-0 podman[74148]: 2025-11-26 11:37:57.094173405 +0000 UTC m=+0.015856993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:37:57 compute-0 brave_dubinsky[74162]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 26 11:37:57 compute-0 systemd[1]: libpod-956dbd33ab8432e497b063efd30af8595ea5780a5ab3e3f405b9479ca7a0d097.scope: Deactivated successfully.
Nov 26 11:37:57 compute-0 conmon[74162]: conmon 956dbd33ab8432e497b0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-956dbd33ab8432e497b063efd30af8595ea5780a5ab3e3f405b9479ca7a0d097.scope/container/memory.events
Nov 26 11:37:57 compute-0 podman[74148]: 2025-11-26 11:37:57.429551957 +0000 UTC m=+0.351235525 container died 956dbd33ab8432e497b063efd30af8595ea5780a5ab3e3f405b9479ca7a0d097 (image=quay.io/ceph/ceph:v18, name=brave_dubinsky, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:37:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-22ed3bc37d7e91821383e9cb26261fd5bd610ca6902c256fdca36ecd9ac61e55-merged.mount: Deactivated successfully.
Nov 26 11:37:57 compute-0 podman[74148]: 2025-11-26 11:37:57.450535711 +0000 UTC m=+0.372219278 container remove 956dbd33ab8432e497b063efd30af8595ea5780a5ab3e3f405b9479ca7a0d097 (image=quay.io/ceph/ceph:v18, name=brave_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 11:37:57 compute-0 systemd[1]: libpod-conmon-956dbd33ab8432e497b063efd30af8595ea5780a5ab3e3f405b9479ca7a0d097.scope: Deactivated successfully.
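cephadm starts bootstrap with a few short-lived probe containers against the target image: the one above reports the ceph version baked into quay.io/ceph/ceph:v18 (18.2.7 reef), the next prints the uid/gid that ceph daemons run as inside the image (167 167), and the one after that prints what looks like a freshly generated cephx key. A rough stand-in for the version probe, assuming the image's default entrypoint allows running an arbitrary command (cephadm itself sets the entrypoint explicitly):

# Pull the image if needed, print the bundled ceph version, then discard the container.
podman run --rm quay.io/ceph/ceph:v18 ceph --version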
Nov 26 11:37:57 compute-0 podman[74176]: 2025-11-26 11:37:57.491359058 +0000 UTC m=+0.026308449 container create a4474eb938263249a5314212a65a2da700710ffd0baeaad3b93182cdca31a9f7 (image=quay.io/ceph/ceph:v18, name=gallant_wilson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:37:57 compute-0 systemd[1]: Started libpod-conmon-a4474eb938263249a5314212a65a2da700710ffd0baeaad3b93182cdca31a9f7.scope.
Nov 26 11:37:57 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:37:57 compute-0 podman[74176]: 2025-11-26 11:37:57.526815001 +0000 UTC m=+0.061764412 container init a4474eb938263249a5314212a65a2da700710ffd0baeaad3b93182cdca31a9f7 (image=quay.io/ceph/ceph:v18, name=gallant_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:37:57 compute-0 podman[74176]: 2025-11-26 11:37:57.531194211 +0000 UTC m=+0.066143602 container start a4474eb938263249a5314212a65a2da700710ffd0baeaad3b93182cdca31a9f7 (image=quay.io/ceph/ceph:v18, name=gallant_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:37:57 compute-0 podman[74176]: 2025-11-26 11:37:57.532246466 +0000 UTC m=+0.067195877 container attach a4474eb938263249a5314212a65a2da700710ffd0baeaad3b93182cdca31a9f7 (image=quay.io/ceph/ceph:v18, name=gallant_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 11:37:57 compute-0 gallant_wilson[74191]: 167 167
Nov 26 11:37:57 compute-0 systemd[1]: libpod-a4474eb938263249a5314212a65a2da700710ffd0baeaad3b93182cdca31a9f7.scope: Deactivated successfully.
Nov 26 11:37:57 compute-0 conmon[74191]: conmon a4474eb938263249a531 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a4474eb938263249a5314212a65a2da700710ffd0baeaad3b93182cdca31a9f7.scope/container/memory.events
Nov 26 11:37:57 compute-0 podman[74176]: 2025-11-26 11:37:57.534245937 +0000 UTC m=+0.069195328 container died a4474eb938263249a5314212a65a2da700710ffd0baeaad3b93182cdca31a9f7 (image=quay.io/ceph/ceph:v18, name=gallant_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 11:37:57 compute-0 podman[74176]: 2025-11-26 11:37:57.551351214 +0000 UTC m=+0.086300605 container remove a4474eb938263249a5314212a65a2da700710ffd0baeaad3b93182cdca31a9f7 (image=quay.io/ceph/ceph:v18, name=gallant_wilson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 11:37:57 compute-0 podman[74176]: 2025-11-26 11:37:57.481379212 +0000 UTC m=+0.016328623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:37:57 compute-0 systemd[1]: libpod-conmon-a4474eb938263249a5314212a65a2da700710ffd0baeaad3b93182cdca31a9f7.scope: Deactivated successfully.
Nov 26 11:37:57 compute-0 podman[74205]: 2025-11-26 11:37:57.590245783 +0000 UTC m=+0.026009945 container create 712c4e2ff5d0821f4dc59f2590d31644c8a87b02f2dbb633986d01f8064f2cf1 (image=quay.io/ceph/ceph:v18, name=determined_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:37:57 compute-0 systemd[1]: Started libpod-conmon-712c4e2ff5d0821f4dc59f2590d31644c8a87b02f2dbb633986d01f8064f2cf1.scope.
Nov 26 11:37:57 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:37:57 compute-0 podman[74205]: 2025-11-26 11:37:57.626937377 +0000 UTC m=+0.062701548 container init 712c4e2ff5d0821f4dc59f2590d31644c8a87b02f2dbb633986d01f8064f2cf1 (image=quay.io/ceph/ceph:v18, name=determined_bhabha, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 11:37:57 compute-0 podman[74205]: 2025-11-26 11:37:57.630932353 +0000 UTC m=+0.066696514 container start 712c4e2ff5d0821f4dc59f2590d31644c8a87b02f2dbb633986d01f8064f2cf1 (image=quay.io/ceph/ceph:v18, name=determined_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 11:37:57 compute-0 podman[74205]: 2025-11-26 11:37:57.632211816 +0000 UTC m=+0.067975987 container attach 712c4e2ff5d0821f4dc59f2590d31644c8a87b02f2dbb633986d01f8064f2cf1 (image=quay.io/ceph/ceph:v18, name=determined_bhabha, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:37:57 compute-0 determined_bhabha[74219]: AQCV5iZpyS58JhAAGSrHKqPGEHPJqXLHOxsmFA==
Nov 26 11:37:57 compute-0 systemd[1]: libpod-712c4e2ff5d0821f4dc59f2590d31644c8a87b02f2dbb633986d01f8064f2cf1.scope: Deactivated successfully.
Nov 26 11:37:57 compute-0 podman[74205]: 2025-11-26 11:37:57.647766026 +0000 UTC m=+0.083530187 container died 712c4e2ff5d0821f4dc59f2590d31644c8a87b02f2dbb633986d01f8064f2cf1 (image=quay.io/ceph/ceph:v18, name=determined_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 26 11:37:57 compute-0 podman[74205]: 2025-11-26 11:37:57.662372961 +0000 UTC m=+0.098137122 container remove 712c4e2ff5d0821f4dc59f2590d31644c8a87b02f2dbb633986d01f8064f2cf1 (image=quay.io/ceph/ceph:v18, name=determined_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 11:37:57 compute-0 podman[74205]: 2025-11-26 11:37:57.579916758 +0000 UTC m=+0.015680920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:37:57 compute-0 systemd[1]: libpod-conmon-712c4e2ff5d0821f4dc59f2590d31644c8a87b02f2dbb633986d01f8064f2cf1.scope: Deactivated successfully.
Nov 26 11:37:57 compute-0 podman[74235]: 2025-11-26 11:37:57.701696999 +0000 UTC m=+0.026617701 container create 8d7175fe046d77e4a97511623d8a90814214983f1f57c794db9b894ce6c2a602 (image=quay.io/ceph/ceph:v18, name=magical_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 26 11:37:57 compute-0 systemd[1]: Started libpod-conmon-8d7175fe046d77e4a97511623d8a90814214983f1f57c794db9b894ce6c2a602.scope.
Nov 26 11:37:57 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:37:57 compute-0 podman[74235]: 2025-11-26 11:37:57.73695567 +0000 UTC m=+0.061876393 container init 8d7175fe046d77e4a97511623d8a90814214983f1f57c794db9b894ce6c2a602 (image=quay.io/ceph/ceph:v18, name=magical_darwin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 11:37:57 compute-0 podman[74235]: 2025-11-26 11:37:57.740473215 +0000 UTC m=+0.065393917 container start 8d7175fe046d77e4a97511623d8a90814214983f1f57c794db9b894ce6c2a602 (image=quay.io/ceph/ceph:v18, name=magical_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:37:57 compute-0 podman[74235]: 2025-11-26 11:37:57.745662815 +0000 UTC m=+0.070583527 container attach 8d7175fe046d77e4a97511623d8a90814214983f1f57c794db9b894ce6c2a602 (image=quay.io/ceph/ceph:v18, name=magical_darwin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 11:37:57 compute-0 magical_darwin[74251]: AQCV5iZp4mICLRAAoOPOvPIhwNihgJMo4IGE8w==
Nov 26 11:37:57 compute-0 systemd[1]: libpod-8d7175fe046d77e4a97511623d8a90814214983f1f57c794db9b894ce6c2a602.scope: Deactivated successfully.
Nov 26 11:37:57 compute-0 podman[74235]: 2025-11-26 11:37:57.757396288 +0000 UTC m=+0.082316990 container died 8d7175fe046d77e4a97511623d8a90814214983f1f57c794db9b894ce6c2a602 (image=quay.io/ceph/ceph:v18, name=magical_darwin, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:37:57 compute-0 podman[74235]: 2025-11-26 11:37:57.773213646 +0000 UTC m=+0.098134348 container remove 8d7175fe046d77e4a97511623d8a90814214983f1f57c794db9b894ce6c2a602 (image=quay.io/ceph/ceph:v18, name=magical_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 11:37:57 compute-0 podman[74235]: 2025-11-26 11:37:57.691734918 +0000 UTC m=+0.016655639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:37:57 compute-0 systemd[1]: libpod-conmon-8d7175fe046d77e4a97511623d8a90814214983f1f57c794db9b894ce6c2a602.scope: Deactivated successfully.
Nov 26 11:37:57 compute-0 podman[74267]: 2025-11-26 11:37:57.8141626 +0000 UTC m=+0.026862903 container create c6392bed80cf1afd2eb7d065731235fae09a83fd4c69cb7d453abf8f6ae3cf04 (image=quay.io/ceph/ceph:v18, name=cranky_chaum, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 11:37:57 compute-0 systemd[1]: Started libpod-conmon-c6392bed80cf1afd2eb7d065731235fae09a83fd4c69cb7d453abf8f6ae3cf04.scope.
Nov 26 11:37:57 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:37:57 compute-0 podman[74267]: 2025-11-26 11:37:57.850376763 +0000 UTC m=+0.063077087 container init c6392bed80cf1afd2eb7d065731235fae09a83fd4c69cb7d453abf8f6ae3cf04 (image=quay.io/ceph/ceph:v18, name=cranky_chaum, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:37:57 compute-0 podman[74267]: 2025-11-26 11:37:57.854012421 +0000 UTC m=+0.066712725 container start c6392bed80cf1afd2eb7d065731235fae09a83fd4c69cb7d453abf8f6ae3cf04 (image=quay.io/ceph/ceph:v18, name=cranky_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:37:57 compute-0 podman[74267]: 2025-11-26 11:37:57.855663936 +0000 UTC m=+0.068364240 container attach c6392bed80cf1afd2eb7d065731235fae09a83fd4c69cb7d453abf8f6ae3cf04 (image=quay.io/ceph/ceph:v18, name=cranky_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:37:57 compute-0 cranky_chaum[74281]: AQCV5iZpKBDHMxAATpiuJw50fJ3OGveTyXdHXQ==
Nov 26 11:37:57 compute-0 systemd[1]: libpod-c6392bed80cf1afd2eb7d065731235fae09a83fd4c69cb7d453abf8f6ae3cf04.scope: Deactivated successfully.
Nov 26 11:37:57 compute-0 podman[74267]: 2025-11-26 11:37:57.870830315 +0000 UTC m=+0.083530619 container died c6392bed80cf1afd2eb7d065731235fae09a83fd4c69cb7d453abf8f6ae3cf04 (image=quay.io/ceph/ceph:v18, name=cranky_chaum, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:37:57 compute-0 podman[74267]: 2025-11-26 11:37:57.887978733 +0000 UTC m=+0.100679037 container remove c6392bed80cf1afd2eb7d065731235fae09a83fd4c69cb7d453abf8f6ae3cf04 (image=quay.io/ceph/ceph:v18, name=cranky_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:37:57 compute-0 podman[74267]: 2025-11-26 11:37:57.802913048 +0000 UTC m=+0.015613373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:37:57 compute-0 systemd[1]: libpod-conmon-c6392bed80cf1afd2eb7d065731235fae09a83fd4c69cb7d453abf8f6ae3cf04.scope: Deactivated successfully.
Nov 26 11:37:57 compute-0 podman[74296]: 2025-11-26 11:37:57.931976548 +0000 UTC m=+0.027068080 container create 7249ae0ebf22875028ec7d0e096af6d810c1fc520e478858934326eb8d4553b1 (image=quay.io/ceph/ceph:v18, name=condescending_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:37:57 compute-0 systemd[1]: Started libpod-conmon-7249ae0ebf22875028ec7d0e096af6d810c1fc520e478858934326eb8d4553b1.scope.
Nov 26 11:37:57 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:37:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8dbef560bfe23dc4780a9fc62770c996ee31d1d5322cf3bb20424bc8f74927e/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 26 11:37:57 compute-0 podman[74296]: 2025-11-26 11:37:57.980005388 +0000 UTC m=+0.075096920 container init 7249ae0ebf22875028ec7d0e096af6d810c1fc520e478858934326eb8d4553b1 (image=quay.io/ceph/ceph:v18, name=condescending_carver, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:37:57 compute-0 podman[74296]: 2025-11-26 11:37:57.985389214 +0000 UTC m=+0.080480746 container start 7249ae0ebf22875028ec7d0e096af6d810c1fc520e478858934326eb8d4553b1 (image=quay.io/ceph/ceph:v18, name=condescending_carver, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:37:57 compute-0 podman[74296]: 2025-11-26 11:37:57.986483558 +0000 UTC m=+0.081575090 container attach 7249ae0ebf22875028ec7d0e096af6d810c1fc520e478858934326eb8d4553b1 (image=quay.io/ceph/ceph:v18, name=condescending_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:37:58 compute-0 condescending_carver[74311]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 26 11:37:58 compute-0 condescending_carver[74311]: setting min_mon_release = pacific
Nov 26 11:37:58 compute-0 condescending_carver[74311]: /usr/bin/monmaptool: set fsid to ebab460c-3fd7-5f66-aa87-e10c143123f7
Nov 26 11:37:58 compute-0 condescending_carver[74311]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 26 11:37:58 compute-0 systemd[1]: libpod-7249ae0ebf22875028ec7d0e096af6d810c1fc520e478858934326eb8d4553b1.scope: Deactivated successfully.
Nov 26 11:37:58 compute-0 podman[74296]: 2025-11-26 11:37:58.007098114 +0000 UTC m=+0.102189646 container died 7249ae0ebf22875028ec7d0e096af6d810c1fc520e478858934326eb8d4553b1 (image=quay.io/ceph/ceph:v18, name=condescending_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 11:37:58 compute-0 podman[74296]: 2025-11-26 11:37:57.920624916 +0000 UTC m=+0.015716468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:37:58 compute-0 podman[74296]: 2025-11-26 11:37:58.022778623 +0000 UTC m=+0.117870156 container remove 7249ae0ebf22875028ec7d0e096af6d810c1fc520e478858934326eb8d4553b1 (image=quay.io/ceph/ceph:v18, name=condescending_carver, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:37:58 compute-0 systemd[1]: libpod-conmon-7249ae0ebf22875028ec7d0e096af6d810c1fc520e478858934326eb8d4553b1.scope: Deactivated successfully.
Nov 26 11:37:58 compute-0 podman[74327]: 2025-11-26 11:37:58.063908239 +0000 UTC m=+0.026977420 container create 6de522ef25266012e4a5337a9cd0fc3a86d94aba3b1de12726330f6fe19beb7a (image=quay.io/ceph/ceph:v18, name=strange_hugle, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 11:37:58 compute-0 systemd[1]: Started libpod-conmon-6de522ef25266012e4a5337a9cd0fc3a86d94aba3b1de12726330f6fe19beb7a.scope.
Nov 26 11:37:58 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67ac17ab06d9d9561c8eccea7f32ceeedf68622ae218652079225aa371511d5/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67ac17ab06d9d9561c8eccea7f32ceeedf68622ae218652079225aa371511d5/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 26 11:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67ac17ab06d9d9561c8eccea7f32ceeedf68622ae218652079225aa371511d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67ac17ab06d9d9561c8eccea7f32ceeedf68622ae218652079225aa371511d5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 11:37:58 compute-0 podman[74327]: 2025-11-26 11:37:58.1139047 +0000 UTC m=+0.076973901 container init 6de522ef25266012e4a5337a9cd0fc3a86d94aba3b1de12726330f6fe19beb7a (image=quay.io/ceph/ceph:v18, name=strange_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:37:58 compute-0 podman[74327]: 2025-11-26 11:37:58.118116615 +0000 UTC m=+0.081185796 container start 6de522ef25266012e4a5337a9cd0fc3a86d94aba3b1de12726330f6fe19beb7a (image=quay.io/ceph/ceph:v18, name=strange_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 11:37:58 compute-0 podman[74327]: 2025-11-26 11:37:58.119457294 +0000 UTC m=+0.082526474 container attach 6de522ef25266012e4a5337a9cd0fc3a86d94aba3b1de12726330f6fe19beb7a (image=quay.io/ceph/ceph:v18, name=strange_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:37:58 compute-0 podman[74327]: 2025-11-26 11:37:58.05185925 +0000 UTC m=+0.014928451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:37:58 compute-0 systemd[1]: libpod-6de522ef25266012e4a5337a9cd0fc3a86d94aba3b1de12726330f6fe19beb7a.scope: Deactivated successfully.
Nov 26 11:37:58 compute-0 conmon[74341]: conmon 6de522ef25266012e4a5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6de522ef25266012e4a5337a9cd0fc3a86d94aba3b1de12726330f6fe19beb7a.scope/container/memory.events
Nov 26 11:37:58 compute-0 podman[74327]: 2025-11-26 11:37:58.158845965 +0000 UTC m=+0.121915146 container died 6de522ef25266012e4a5337a9cd0fc3a86d94aba3b1de12726330f6fe19beb7a (image=quay.io/ceph/ceph:v18, name=strange_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 11:37:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-f67ac17ab06d9d9561c8eccea7f32ceeedf68622ae218652079225aa371511d5-merged.mount: Deactivated successfully.
Nov 26 11:37:58 compute-0 podman[74327]: 2025-11-26 11:37:58.175142085 +0000 UTC m=+0.138211266 container remove 6de522ef25266012e4a5337a9cd0fc3a86d94aba3b1de12726330f6fe19beb7a (image=quay.io/ceph/ceph:v18, name=strange_hugle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 11:37:58 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 11:37:58 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 11:37:58 compute-0 systemd[1]: libpod-conmon-6de522ef25266012e4a5337a9cd0fc3a86d94aba3b1de12726330f6fe19beb7a.scope: Deactivated successfully.
Nov 26 11:37:58 compute-0 systemd[1]: Reloading.
Nov 26 11:37:58 compute-0 systemd-sysv-generator[74403]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:37:58 compute-0 systemd-rc-local-generator[74400]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:37:58 compute-0 systemd[1]: Reloading.
Nov 26 11:37:58 compute-0 systemd-rc-local-generator[74436]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:37:58 compute-0 systemd-sysv-generator[74439]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:37:58 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Nov 26 11:37:58 compute-0 systemd[1]: Reloading.
Nov 26 11:37:58 compute-0 systemd-rc-local-generator[74474]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:37:58 compute-0 systemd-sysv-generator[74477]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:37:58 compute-0 systemd[1]: Reached target Ceph cluster ebab460c-3fd7-5f66-aa87-e10c143123f7.
Nov 26 11:37:58 compute-0 systemd[1]: Reloading.
Nov 26 11:37:58 compute-0 systemd-rc-local-generator[74511]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:37:58 compute-0 systemd-sysv-generator[74515]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:37:58 compute-0 systemd[1]: Reloading.
Nov 26 11:37:59 compute-0 systemd-sysv-generator[74557]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:37:59 compute-0 systemd-rc-local-generator[74553]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:37:59 compute-0 systemd[1]: Created slice Slice /system/ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7.
Nov 26 11:37:59 compute-0 systemd[1]: Reached target System Time Set.
Nov 26 11:37:59 compute-0 systemd[1]: Reached target System Time Synchronized.
Nov 26 11:37:59 compute-0 systemd[1]: Starting Ceph mon.compute-0 for ebab460c-3fd7-5f66-aa87-e10c143123f7...
Nov 26 11:37:59 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 11:37:59 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 11:37:59 compute-0 podman[74607]: 2025-11-26 11:37:59.360647659 +0000 UTC m=+0.028587526 container create 50d420c569a0b86b5d40d72d0eb8ce25fa506ffc429712baa62f64a728717293 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1655971910a1471c97416f8d098a7e9b444fd1ebfbd2eecb9e1601df5dc9914/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1655971910a1471c97416f8d098a7e9b444fd1ebfbd2eecb9e1601df5dc9914/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1655971910a1471c97416f8d098a7e9b444fd1ebfbd2eecb9e1601df5dc9914/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1655971910a1471c97416f8d098a7e9b444fd1ebfbd2eecb9e1601df5dc9914/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 11:37:59 compute-0 podman[74607]: 2025-11-26 11:37:59.401108091 +0000 UTC m=+0.069047980 container init 50d420c569a0b86b5d40d72d0eb8ce25fa506ffc429712baa62f64a728717293 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:37:59 compute-0 podman[74607]: 2025-11-26 11:37:59.405454971 +0000 UTC m=+0.073394839 container start 50d420c569a0b86b5d40d72d0eb8ce25fa506ffc429712baa62f64a728717293 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 11:37:59 compute-0 bash[74607]: 50d420c569a0b86b5d40d72d0eb8ce25fa506ffc429712baa62f64a728717293
Nov 26 11:37:59 compute-0 podman[74607]: 2025-11-26 11:37:59.349106718 +0000 UTC m=+0.017046607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:37:59 compute-0 systemd[1]: Started Ceph mon.compute-0 for ebab460c-3fd7-5f66-aa87-e10c143123f7.
Nov 26 11:37:59 compute-0 ceph-mon[74623]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 11:37:59 compute-0 ceph-mon[74623]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 26 11:37:59 compute-0 ceph-mon[74623]: pidfile_write: ignore empty --pid-file
Nov 26 11:37:59 compute-0 ceph-mon[74623]: load: jerasure load: lrc 
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: RocksDB version: 7.9.2
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: Git sha 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: DB SUMMARY
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: DB Session ID:  DODGTBS60WSEBN5BT22N
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: CURRENT file:  CURRENT
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                         Options.error_if_exists: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                       Options.create_if_missing: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                                     Options.env: 0x56405273fc40
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                                Options.info_log: 0x564053df6e80
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                              Options.statistics: (nil)
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                               Options.use_fsync: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                              Options.db_log_dir: 
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                                 Options.wal_dir: 
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                    Options.write_buffer_manager: 0x564053e06b40
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                  Options.unordered_write: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                               Options.row_cache: None
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                              Options.wal_filter: None
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:             Options.two_write_queues: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:             Options.wal_compression: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:             Options.atomic_flush: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:             Options.max_background_jobs: 2
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:             Options.max_background_compactions: -1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:             Options.max_subcompactions: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:             Options.max_total_wal_size: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                          Options.max_open_files: -1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:       Options.compaction_readahead_size: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: Compression algorithms supported:
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:         kZSTD supported: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:         kXpressCompression supported: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:         kBZip2Compression supported: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:         kLZ4Compression supported: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:         kZlibCompression supported: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:         kLZ4HCCompression supported: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:         kSnappyCompression supported: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:           Options.merge_operator: 
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:        Options.compaction_filter: None
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564053df6a80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564053def1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:        Options.write_buffer_size: 33554432
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:  Options.max_write_buffer_number: 2
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:          Options.compression: NoCompression
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:             Options.num_levels: 7
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 363c2a1d-8d28-40b7-a8ff-7233f1c9b7d5
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157079434883, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157079435610, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764157079, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "363c2a1d-8d28-40b7-a8ff-7233f1c9b7d5", "db_session_id": "DODGTBS60WSEBN5BT22N", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157079435713, "job": 1, "event": "recovery_finished"}
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x564053e18e00
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: DB pointer 0x564053ea2000
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 11:37:59 compute-0 ceph-mon[74623]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      2.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      2.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      2.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      2.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.35 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.35 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564053def1f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 26 11:37:59 compute-0 ceph-mon[74623]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid ebab460c-3fd7-5f66-aa87-e10c143123f7
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@-1(???) e0 preinit fsid ebab460c-3fd7-5f66-aa87-e10c143123f7
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 26 11:37:59 compute-0 ceph-mon[74623]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 26 11:37:59 compute-0 ceph-mon[74623]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 26 11:37:59 compute-0 ceph-mon[74623]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 26 11:37:59 compute-0 ceph-mon[74623]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 26 11:37:59 compute-0 ceph-mon[74623]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC 7763 64-Core Processor,created_at=2025-11-26T11:37:58.143302Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:04:00.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865360,os=Linux}
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader).mds e1 new map
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 26 11:37:59 compute-0 ceph-mon[74623]: log_channel(cluster) log [DBG] : fsmap 
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mkfs ebab460c-3fd7-5f66-aa87-e10c143123f7
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 26 11:37:59 compute-0 ceph-mon[74623]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 26 11:37:59 compute-0 podman[74624]: 2025-11-26 11:37:59.459597494 +0000 UTC m=+0.032414797 container create a17be52e432e8b6e95bac23caafd2ed39f7eadc0fc6d880c44265d4c7528841c (image=quay.io/ceph/ceph:v18, name=condescending_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 26 11:37:59 compute-0 ceph-mon[74623]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 26 11:37:59 compute-0 systemd[1]: Started libpod-conmon-a17be52e432e8b6e95bac23caafd2ed39f7eadc0fc6d880c44265d4c7528841c.scope.
Nov 26 11:37:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac26dfea51bfab7ff7da76e86f12536f5ad1b4be4d83bc75c38ba6d4bd6ce04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac26dfea51bfab7ff7da76e86f12536f5ad1b4be4d83bc75c38ba6d4bd6ce04/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac26dfea51bfab7ff7da76e86f12536f5ad1b4be4d83bc75c38ba6d4bd6ce04/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 11:37:59 compute-0 podman[74624]: 2025-11-26 11:37:59.522963885 +0000 UTC m=+0.095781188 container init a17be52e432e8b6e95bac23caafd2ed39f7eadc0fc6d880c44265d4c7528841c (image=quay.io/ceph/ceph:v18, name=condescending_yonath, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 11:37:59 compute-0 podman[74624]: 2025-11-26 11:37:59.527593397 +0000 UTC m=+0.100410701 container start a17be52e432e8b6e95bac23caafd2ed39f7eadc0fc6d880c44265d4c7528841c (image=quay.io/ceph/ceph:v18, name=condescending_yonath, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 11:37:59 compute-0 podman[74624]: 2025-11-26 11:37:59.529138583 +0000 UTC m=+0.101955886 container attach a17be52e432e8b6e95bac23caafd2ed39f7eadc0fc6d880c44265d4c7528841c (image=quay.io/ceph/ceph:v18, name=condescending_yonath, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 11:37:59 compute-0 podman[74624]: 2025-11-26 11:37:59.448004625 +0000 UTC m=+0.020821948 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:37:59 compute-0 ceph-mon[74623]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 26 11:37:59 compute-0 ceph-mon[74623]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1132235568' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 26 11:37:59 compute-0 condescending_yonath[74676]:   cluster:
Nov 26 11:37:59 compute-0 condescending_yonath[74676]:     id:     ebab460c-3fd7-5f66-aa87-e10c143123f7
Nov 26 11:37:59 compute-0 condescending_yonath[74676]:     health: HEALTH_OK
Nov 26 11:37:59 compute-0 condescending_yonath[74676]:  
Nov 26 11:37:59 compute-0 condescending_yonath[74676]:   services:
Nov 26 11:37:59 compute-0 condescending_yonath[74676]:     mon: 1 daemons, quorum compute-0 (age 0.391724s)
Nov 26 11:37:59 compute-0 condescending_yonath[74676]:     mgr: no daemons active
Nov 26 11:37:59 compute-0 condescending_yonath[74676]:     osd: 0 osds: 0 up, 0 in
Nov 26 11:37:59 compute-0 condescending_yonath[74676]:  
Nov 26 11:37:59 compute-0 condescending_yonath[74676]:   data:
Nov 26 11:37:59 compute-0 condescending_yonath[74676]:     pools:   0 pools, 0 pgs
Nov 26 11:37:59 compute-0 condescending_yonath[74676]:     objects: 0 objects, 0 B
Nov 26 11:37:59 compute-0 condescending_yonath[74676]:     usage:   0 B used, 0 B / 0 B avail
Nov 26 11:37:59 compute-0 condescending_yonath[74676]:     pgs:     
Nov 26 11:37:59 compute-0 condescending_yonath[74676]:  
Nov 26 11:37:59 compute-0 systemd[1]: libpod-a17be52e432e8b6e95bac23caafd2ed39f7eadc0fc6d880c44265d4c7528841c.scope: Deactivated successfully.
Nov 26 11:37:59 compute-0 podman[74624]: 2025-11-26 11:37:59.857001519 +0000 UTC m=+0.429818821 container died a17be52e432e8b6e95bac23caafd2ed39f7eadc0fc6d880c44265d4c7528841c (image=quay.io/ceph/ceph:v18, name=condescending_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 11:37:59 compute-0 podman[74624]: 2025-11-26 11:37:59.880984498 +0000 UTC m=+0.453801801 container remove a17be52e432e8b6e95bac23caafd2ed39f7eadc0fc6d880c44265d4c7528841c (image=quay.io/ceph/ceph:v18, name=condescending_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:37:59 compute-0 systemd[1]: libpod-conmon-a17be52e432e8b6e95bac23caafd2ed39f7eadc0fc6d880c44265d4c7528841c.scope: Deactivated successfully.
Nov 26 11:37:59 compute-0 podman[74711]: 2025-11-26 11:37:59.922152826 +0000 UTC m=+0.026719695 container create df1af60b9e9c58192120d087ddde2d41f48526730b46482856cdaec5170c01a1 (image=quay.io/ceph/ceph:v18, name=festive_wright, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 11:37:59 compute-0 systemd[1]: Started libpod-conmon-df1af60b9e9c58192120d087ddde2d41f48526730b46482856cdaec5170c01a1.scope.
Nov 26 11:37:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c09efb26be677a8bb68b8362422fa360fca0e862b58b154991a3f17ee631c44/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c09efb26be677a8bb68b8362422fa360fca0e862b58b154991a3f17ee631c44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c09efb26be677a8bb68b8362422fa360fca0e862b58b154991a3f17ee631c44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c09efb26be677a8bb68b8362422fa360fca0e862b58b154991a3f17ee631c44/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 11:37:59 compute-0 podman[74711]: 2025-11-26 11:37:59.974293322 +0000 UTC m=+0.078860200 container init df1af60b9e9c58192120d087ddde2d41f48526730b46482856cdaec5170c01a1 (image=quay.io/ceph/ceph:v18, name=festive_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 11:37:59 compute-0 podman[74711]: 2025-11-26 11:37:59.97884611 +0000 UTC m=+0.083412979 container start df1af60b9e9c58192120d087ddde2d41f48526730b46482856cdaec5170c01a1 (image=quay.io/ceph/ceph:v18, name=festive_wright, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:37:59 compute-0 podman[74711]: 2025-11-26 11:37:59.982953208 +0000 UTC m=+0.087520076 container attach df1af60b9e9c58192120d087ddde2d41f48526730b46482856cdaec5170c01a1 (image=quay.io/ceph/ceph:v18, name=festive_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:38:00 compute-0 podman[74711]: 2025-11-26 11:37:59.910657743 +0000 UTC m=+0.015224631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:00 compute-0 ceph-mon[74623]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 26 11:38:00 compute-0 ceph-mon[74623]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/294949328' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 26 11:38:00 compute-0 ceph-mon[74623]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/294949328' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 26 11:38:00 compute-0 festive_wright[74724]: 
Nov 26 11:38:00 compute-0 festive_wright[74724]: [global]
Nov 26 11:38:00 compute-0 festive_wright[74724]:         fsid = ebab460c-3fd7-5f66-aa87-e10c143123f7
Nov 26 11:38:00 compute-0 festive_wright[74724]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 26 11:38:00 compute-0 festive_wright[74724]:         osd_crush_chooseleaf_type = 0
Nov 26 11:38:00 compute-0 systemd[1]: libpod-df1af60b9e9c58192120d087ddde2d41f48526730b46482856cdaec5170c01a1.scope: Deactivated successfully.
Nov 26 11:38:00 compute-0 podman[74750]: 2025-11-26 11:38:00.329812577 +0000 UTC m=+0.015356990 container died df1af60b9e9c58192120d087ddde2d41f48526730b46482856cdaec5170c01a1 (image=quay.io/ceph/ceph:v18, name=festive_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 11:38:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c09efb26be677a8bb68b8362422fa360fca0e862b58b154991a3f17ee631c44-merged.mount: Deactivated successfully.
Nov 26 11:38:00 compute-0 podman[74750]: 2025-11-26 11:38:00.349687569 +0000 UTC m=+0.035231981 container remove df1af60b9e9c58192120d087ddde2d41f48526730b46482856cdaec5170c01a1 (image=quay.io/ceph/ceph:v18, name=festive_wright, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:38:00 compute-0 systemd[1]: libpod-conmon-df1af60b9e9c58192120d087ddde2d41f48526730b46482856cdaec5170c01a1.scope: Deactivated successfully.
Nov 26 11:38:00 compute-0 podman[74762]: 2025-11-26 11:38:00.395565451 +0000 UTC m=+0.028913642 container create cc0095b67fbee2728bd743df12c575388dd1ce32bfc4a22ffca0c75fb088f637 (image=quay.io/ceph/ceph:v18, name=upbeat_thompson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:38:00 compute-0 systemd[1]: Started libpod-conmon-cc0095b67fbee2728bd743df12c575388dd1ce32bfc4a22ffca0c75fb088f637.scope.
Nov 26 11:38:00 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d5e3b0f02df6a0636d4054023b10e02146ec7f20792d7397995922952ae085/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d5e3b0f02df6a0636d4054023b10e02146ec7f20792d7397995922952ae085/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d5e3b0f02df6a0636d4054023b10e02146ec7f20792d7397995922952ae085/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d5e3b0f02df6a0636d4054023b10e02146ec7f20792d7397995922952ae085/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:00 compute-0 podman[74762]: 2025-11-26 11:38:00.437201822 +0000 UTC m=+0.070550033 container init cc0095b67fbee2728bd743df12c575388dd1ce32bfc4a22ffca0c75fb088f637 (image=quay.io/ceph/ceph:v18, name=upbeat_thompson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:38:00 compute-0 podman[74762]: 2025-11-26 11:38:00.441667005 +0000 UTC m=+0.075015195 container start cc0095b67fbee2728bd743df12c575388dd1ce32bfc4a22ffca0c75fb088f637 (image=quay.io/ceph/ceph:v18, name=upbeat_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 11:38:00 compute-0 podman[74762]: 2025-11-26 11:38:00.442925549 +0000 UTC m=+0.076273740 container attach cc0095b67fbee2728bd743df12c575388dd1ce32bfc4a22ffca0c75fb088f637 (image=quay.io/ceph/ceph:v18, name=upbeat_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:38:00 compute-0 ceph-mon[74623]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 26 11:38:00 compute-0 ceph-mon[74623]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 26 11:38:00 compute-0 ceph-mon[74623]: fsmap 
Nov 26 11:38:00 compute-0 ceph-mon[74623]: osdmap e1: 0 total, 0 up, 0 in
Nov 26 11:38:00 compute-0 ceph-mon[74623]: mgrmap e1: no daemons active
Nov 26 11:38:00 compute-0 ceph-mon[74623]: from='client.? 192.168.122.100:0/1132235568' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 26 11:38:00 compute-0 ceph-mon[74623]: from='client.? 192.168.122.100:0/294949328' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 26 11:38:00 compute-0 ceph-mon[74623]: from='client.? 192.168.122.100:0/294949328' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 26 11:38:00 compute-0 podman[74762]: 2025-11-26 11:38:00.384715143 +0000 UTC m=+0.018063334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:00 compute-0 ceph-mon[74623]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:38:00 compute-0 ceph-mon[74623]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2094338436' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:38:00 compute-0 systemd[1]: libpod-cc0095b67fbee2728bd743df12c575388dd1ce32bfc4a22ffca0c75fb088f637.scope: Deactivated successfully.
Nov 26 11:38:00 compute-0 podman[74762]: 2025-11-26 11:38:00.768606178 +0000 UTC m=+0.401954369 container died cc0095b67fbee2728bd743df12c575388dd1ce32bfc4a22ffca0c75fb088f637 (image=quay.io/ceph/ceph:v18, name=upbeat_thompson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 11:38:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9d5e3b0f02df6a0636d4054023b10e02146ec7f20792d7397995922952ae085-merged.mount: Deactivated successfully.
Nov 26 11:38:00 compute-0 podman[74762]: 2025-11-26 11:38:00.788405848 +0000 UTC m=+0.421754039 container remove cc0095b67fbee2728bd743df12c575388dd1ce32bfc4a22ffca0c75fb088f637 (image=quay.io/ceph/ceph:v18, name=upbeat_thompson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:38:00 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for ebab460c-3fd7-5f66-aa87-e10c143123f7...
Nov 26 11:38:00 compute-0 systemd[1]: libpod-conmon-cc0095b67fbee2728bd743df12c575388dd1ce32bfc4a22ffca0c75fb088f637.scope: Deactivated successfully.
Nov 26 11:38:00 compute-0 ceph-mon[74623]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 26 11:38:00 compute-0 ceph-mon[74623]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 26 11:38:00 compute-0 ceph-mon[74623]: mon.compute-0@0(leader) e1 shutdown
Nov 26 11:38:00 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0[74619]: 2025-11-26T11:38:00.912+0000 7f5e93e65640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 26 11:38:00 compute-0 ceph-mon[74623]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 26 11:38:00 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0[74619]: 2025-11-26T11:38:00.912+0000 7f5e93e65640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 26 11:38:00 compute-0 ceph-mon[74623]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 26 11:38:01 compute-0 podman[74831]: 2025-11-26 11:38:01.157392346 +0000 UTC m=+0.266524700 container died 50d420c569a0b86b5d40d72d0eb8ce25fa506ffc429712baa62f64a728717293 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:38:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1655971910a1471c97416f8d098a7e9b444fd1ebfbd2eecb9e1601df5dc9914-merged.mount: Deactivated successfully.
Nov 26 11:38:01 compute-0 podman[74831]: 2025-11-26 11:38:01.174061179 +0000 UTC m=+0.283193534 container remove 50d420c569a0b86b5d40d72d0eb8ce25fa506ffc429712baa62f64a728717293 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:38:01 compute-0 bash[74831]: ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0
Nov 26 11:38:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 11:38:01 compute-0 systemd[1]: ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7@mon.compute-0.service: Deactivated successfully.
Nov 26 11:38:01 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for ebab460c-3fd7-5f66-aa87-e10c143123f7.
Nov 26 11:38:01 compute-0 systemd[1]: Starting Ceph mon.compute-0 for ebab460c-3fd7-5f66-aa87-e10c143123f7...
Nov 26 11:38:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 11:38:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 11:38:01 compute-0 podman[74912]: 2025-11-26 11:38:01.410995583 +0000 UTC m=+0.027464549 container create 810eaed6cbde00330c0646cedaef5c2ae94579236d67ba42b8ef02d055d04ad5 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bd17c52ab315e0cb93149941f32d2c8fbe27bf909963d6422b4e1b72f166aae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bd17c52ab315e0cb93149941f32d2c8fbe27bf909963d6422b4e1b72f166aae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bd17c52ab315e0cb93149941f32d2c8fbe27bf909963d6422b4e1b72f166aae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bd17c52ab315e0cb93149941f32d2c8fbe27bf909963d6422b4e1b72f166aae/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:01 compute-0 podman[74912]: 2025-11-26 11:38:01.447733924 +0000 UTC m=+0.064202901 container init 810eaed6cbde00330c0646cedaef5c2ae94579236d67ba42b8ef02d055d04ad5 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 11:38:01 compute-0 podman[74912]: 2025-11-26 11:38:01.453222297 +0000 UTC m=+0.069691264 container start 810eaed6cbde00330c0646cedaef5c2ae94579236d67ba42b8ef02d055d04ad5 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 11:38:01 compute-0 bash[74912]: 810eaed6cbde00330c0646cedaef5c2ae94579236d67ba42b8ef02d055d04ad5
Nov 26 11:38:01 compute-0 podman[74912]: 2025-11-26 11:38:01.399878832 +0000 UTC m=+0.016347819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:01 compute-0 systemd[1]: Started Ceph mon.compute-0 for ebab460c-3fd7-5f66-aa87-e10c143123f7.
Nov 26 11:38:01 compute-0 ceph-mon[74928]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 11:38:01 compute-0 ceph-mon[74928]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 26 11:38:01 compute-0 ceph-mon[74928]: pidfile_write: ignore empty --pid-file
Nov 26 11:38:01 compute-0 ceph-mon[74928]: load: jerasure load: lrc 
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: RocksDB version: 7.9.2
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: Git sha 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: DB SUMMARY
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: DB Session ID:  CJT49RLFB1C6KNYXG0ER
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: CURRENT file:  CURRENT
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 54278 ; 
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                         Options.error_if_exists: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                       Options.create_if_missing: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                                     Options.env: 0x557bd3774c40
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                                Options.info_log: 0x557bd53fb040
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                              Options.statistics: (nil)
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                               Options.use_fsync: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                              Options.db_log_dir: 
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                                 Options.wal_dir: 
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                    Options.write_buffer_manager: 0x557bd540ab40
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                  Options.unordered_write: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                               Options.row_cache: None
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                              Options.wal_filter: None
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:             Options.two_write_queues: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:             Options.wal_compression: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:             Options.atomic_flush: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:             Options.max_background_jobs: 2
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:             Options.max_background_compactions: -1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:             Options.max_subcompactions: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:             Options.max_total_wal_size: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                          Options.max_open_files: -1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:       Options.compaction_readahead_size: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: Compression algorithms supported:
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:         kZSTD supported: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:         kXpressCompression supported: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:         kBZip2Compression supported: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:         kLZ4Compression supported: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:         kZlibCompression supported: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:         kLZ4HCCompression supported: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:         kSnappyCompression supported: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:           Options.merge_operator: 
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:        Options.compaction_filter: None
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557bd53fac40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557bd53f31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:        Options.write_buffer_size: 33554432
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:  Options.max_write_buffer_number: 2
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:          Options.compression: NoCompression
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:             Options.num_levels: 7
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 363c2a1d-8d28-40b7-a8ff-7233f1c9b7d5
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157081484933, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157081486001, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 53978, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 131, "table_properties": {"data_size": 52537, "index_size": 147, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2994, "raw_average_key_size": 30, "raw_value_size": 50184, "raw_average_value_size": 512, "num_data_blocks": 7, "num_entries": 98, "num_filter_entries": 98, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764157081, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "363c2a1d-8d28-40b7-a8ff-7233f1c9b7d5", "db_session_id": "CJT49RLFB1C6KNYXG0ER", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157081486077, "job": 1, "event": "recovery_finished"}
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557bd541ce00
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: DB pointer 0x557bd5524000
Nov 26 11:38:01 compute-0 ceph-mon[74928]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid ebab460c-3fd7-5f66-aa87-e10c143123f7
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 11:38:01 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   54.61 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     62.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      2/0   54.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     62.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     62.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     62.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 5.71 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 5.71 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557bd53f31f0#2 capacity: 512.00 MB usage: 25.88 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,25.11 KB,0.00478923%) FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.34 KB,6.55651e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 26 11:38:01 compute-0 ceph-mon[74928]: mon.compute-0@-1(???) e1 preinit fsid ebab460c-3fd7-5f66-aa87-e10c143123f7
Nov 26 11:38:01 compute-0 ceph-mon[74928]: mon.compute-0@-1(???).mds e1 new map
Nov 26 11:38:01 compute-0 ceph-mon[74928]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 26 11:38:01 compute-0 ceph-mon[74928]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 26 11:38:01 compute-0 ceph-mon[74928]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 26 11:38:01 compute-0 ceph-mon[74928]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 26 11:38:01 compute-0 ceph-mon[74928]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 26 11:38:01 compute-0 ceph-mon[74928]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 26 11:38:01 compute-0 ceph-mon[74928]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 26 11:38:01 compute-0 ceph-mon[74928]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 26 11:38:01 compute-0 ceph-mon[74928]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 26 11:38:01 compute-0 ceph-mon[74928]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 26 11:38:01 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 26 11:38:01 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 26 11:38:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 26 11:38:01 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : fsmap 
Nov 26 11:38:01 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 26 11:38:01 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 26 11:38:01 compute-0 podman[74929]: 2025-11-26 11:38:01.501059445 +0000 UTC m=+0.030303776 container create c005a2173209b97b681a69ab3319ae7291be194289fdba4952effcd66ab01cdf (image=quay.io/ceph/ceph:v18, name=intelligent_diffie, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 11:38:01 compute-0 systemd[1]: Started libpod-conmon-c005a2173209b97b681a69ab3319ae7291be194289fdba4952effcd66ab01cdf.scope.
Nov 26 11:38:01 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3263765ffbec7f20be9f594af695425f460afdd8ba98e6ad07e0e79317329ef7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3263765ffbec7f20be9f594af695425f460afdd8ba98e6ad07e0e79317329ef7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3263765ffbec7f20be9f594af695425f460afdd8ba98e6ad07e0e79317329ef7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:01 compute-0 ceph-mon[74928]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 26 11:38:01 compute-0 ceph-mon[74928]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 26 11:38:01 compute-0 ceph-mon[74928]: fsmap 
Nov 26 11:38:01 compute-0 ceph-mon[74928]: osdmap e1: 0 total, 0 up, 0 in
Nov 26 11:38:01 compute-0 ceph-mon[74928]: mgrmap e1: no daemons active
Nov 26 11:38:01 compute-0 podman[74929]: 2025-11-26 11:38:01.561057863 +0000 UTC m=+0.090302204 container init c005a2173209b97b681a69ab3319ae7291be194289fdba4952effcd66ab01cdf (image=quay.io/ceph/ceph:v18, name=intelligent_diffie, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 11:38:01 compute-0 podman[74929]: 2025-11-26 11:38:01.566881829 +0000 UTC m=+0.096126160 container start c005a2173209b97b681a69ab3319ae7291be194289fdba4952effcd66ab01cdf (image=quay.io/ceph/ceph:v18, name=intelligent_diffie, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 11:38:01 compute-0 podman[74929]: 2025-11-26 11:38:01.567982885 +0000 UTC m=+0.097227227 container attach c005a2173209b97b681a69ab3319ae7291be194289fdba4952effcd66ab01cdf (image=quay.io/ceph/ceph:v18, name=intelligent_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:38:01 compute-0 podman[74929]: 2025-11-26 11:38:01.487727927 +0000 UTC m=+0.016972278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Nov 26 11:38:01 compute-0 systemd[1]: libpod-c005a2173209b97b681a69ab3319ae7291be194289fdba4952effcd66ab01cdf.scope: Deactivated successfully.
Nov 26 11:38:01 compute-0 conmon[74980]: conmon c005a2173209b97b681a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c005a2173209b97b681a69ab3319ae7291be194289fdba4952effcd66ab01cdf.scope/container/memory.events
Nov 26 11:38:01 compute-0 podman[74929]: 2025-11-26 11:38:01.89330042 +0000 UTC m=+0.422544761 container died c005a2173209b97b681a69ab3319ae7291be194289fdba4952effcd66ab01cdf (image=quay.io/ceph/ceph:v18, name=intelligent_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 11:38:01 compute-0 podman[74929]: 2025-11-26 11:38:01.915401921 +0000 UTC m=+0.444646252 container remove c005a2173209b97b681a69ab3319ae7291be194289fdba4952effcd66ab01cdf (image=quay.io/ceph/ceph:v18, name=intelligent_diffie, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:38:01 compute-0 systemd[1]: libpod-conmon-c005a2173209b97b681a69ab3319ae7291be194289fdba4952effcd66ab01cdf.scope: Deactivated successfully.
Nov 26 11:38:01 compute-0 podman[75016]: 2025-11-26 11:38:01.958265817 +0000 UTC m=+0.028722621 container create c0d37421436cdf5cfbb2f3a3052421057351ff8aa348bc3c7e4d6e34a557767f (image=quay.io/ceph/ceph:v18, name=jovial_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 11:38:01 compute-0 systemd[1]: Started libpod-conmon-c0d37421436cdf5cfbb2f3a3052421057351ff8aa348bc3c7e4d6e34a557767f.scope.
Nov 26 11:38:02 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6479b0f6dba503750d23b69c7340ce9e9a96bba39c411d534ca1eb2c1aa9b40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6479b0f6dba503750d23b69c7340ce9e9a96bba39c411d534ca1eb2c1aa9b40/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6479b0f6dba503750d23b69c7340ce9e9a96bba39c411d534ca1eb2c1aa9b40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:02 compute-0 podman[75016]: 2025-11-26 11:38:02.014254162 +0000 UTC m=+0.084710985 container init c0d37421436cdf5cfbb2f3a3052421057351ff8aa348bc3c7e4d6e34a557767f (image=quay.io/ceph/ceph:v18, name=jovial_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:38:02 compute-0 podman[75016]: 2025-11-26 11:38:02.018911607 +0000 UTC m=+0.089368410 container start c0d37421436cdf5cfbb2f3a3052421057351ff8aa348bc3c7e4d6e34a557767f (image=quay.io/ceph/ceph:v18, name=jovial_tharp, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 11:38:02 compute-0 podman[75016]: 2025-11-26 11:38:02.02002645 +0000 UTC m=+0.090483254 container attach c0d37421436cdf5cfbb2f3a3052421057351ff8aa348bc3c7e4d6e34a557767f (image=quay.io/ceph/ceph:v18, name=jovial_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:38:02 compute-0 podman[75016]: 2025-11-26 11:38:01.94625966 +0000 UTC m=+0.016716483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Nov 26 11:38:02 compute-0 systemd[1]: libpod-c0d37421436cdf5cfbb2f3a3052421057351ff8aa348bc3c7e4d6e34a557767f.scope: Deactivated successfully.
Nov 26 11:38:02 compute-0 podman[75016]: 2025-11-26 11:38:02.343099641 +0000 UTC m=+0.413556444 container died c0d37421436cdf5cfbb2f3a3052421057351ff8aa348bc3c7e4d6e34a557767f (image=quay.io/ceph/ceph:v18, name=jovial_tharp, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:38:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6479b0f6dba503750d23b69c7340ce9e9a96bba39c411d534ca1eb2c1aa9b40-merged.mount: Deactivated successfully.
Nov 26 11:38:02 compute-0 podman[75016]: 2025-11-26 11:38:02.368886925 +0000 UTC m=+0.439343728 container remove c0d37421436cdf5cfbb2f3a3052421057351ff8aa348bc3c7e4d6e34a557767f (image=quay.io/ceph/ceph:v18, name=jovial_tharp, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:38:02 compute-0 systemd[1]: libpod-conmon-c0d37421436cdf5cfbb2f3a3052421057351ff8aa348bc3c7e4d6e34a557767f.scope: Deactivated successfully.
Nov 26 11:38:02 compute-0 systemd[1]: Reloading.
Nov 26 11:38:02 compute-0 systemd-sysv-generator[75088]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:38:02 compute-0 systemd-rc-local-generator[75085]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:38:02 compute-0 systemd[1]: Reloading.
Nov 26 11:38:02 compute-0 systemd-rc-local-generator[75126]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:38:02 compute-0 systemd-sysv-generator[75131]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:38:02 compute-0 systemd[1]: Starting Ceph mgr.compute-0.mwrktr for ebab460c-3fd7-5f66-aa87-e10c143123f7...
Nov 26 11:38:02 compute-0 podman[75181]: 2025-11-26 11:38:02.928501276 +0000 UTC m=+0.025280740 container create bb7060fb261e5fe01e009c3b1dd454225180ac9f13a4e814ef1196fc9cf6ba57 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 11:38:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886dc23a623dee97c76d949caa50a0c558d26a6f2c56989663ed3ec923838fe4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886dc23a623dee97c76d949caa50a0c558d26a6f2c56989663ed3ec923838fe4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886dc23a623dee97c76d949caa50a0c558d26a6f2c56989663ed3ec923838fe4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886dc23a623dee97c76d949caa50a0c558d26a6f2c56989663ed3ec923838fe4/merged/var/lib/ceph/mgr/ceph-compute-0.mwrktr supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:02 compute-0 podman[75181]: 2025-11-26 11:38:02.970433595 +0000 UTC m=+0.067213068 container init bb7060fb261e5fe01e009c3b1dd454225180ac9f13a4e814ef1196fc9cf6ba57 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:38:02 compute-0 podman[75181]: 2025-11-26 11:38:02.973974534 +0000 UTC m=+0.070754008 container start bb7060fb261e5fe01e009c3b1dd454225180ac9f13a4e814ef1196fc9cf6ba57 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 11:38:02 compute-0 bash[75181]: bb7060fb261e5fe01e009c3b1dd454225180ac9f13a4e814ef1196fc9cf6ba57
Nov 26 11:38:02 compute-0 podman[75181]: 2025-11-26 11:38:02.91729684 +0000 UTC m=+0.014076324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:02 compute-0 systemd[1]: Started Ceph mgr.compute-0.mwrktr for ebab460c-3fd7-5f66-aa87-e10c143123f7.
Nov 26 11:38:03 compute-0 ceph-mgr[75197]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 11:38:03 compute-0 ceph-mgr[75197]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 26 11:38:03 compute-0 ceph-mgr[75197]: pidfile_write: ignore empty --pid-file
Nov 26 11:38:03 compute-0 podman[75198]: 2025-11-26 11:38:03.020436078 +0000 UTC m=+0.026418466 container create c6f3518e1a3c60f255fe025352c8fe58335383f6b8ecaabf92128a01a835a557 (image=quay.io/ceph/ceph:v18, name=silly_moser, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:38:03 compute-0 systemd[1]: Started libpod-conmon-c6f3518e1a3c60f255fe025352c8fe58335383f6b8ecaabf92128a01a835a557.scope.
Nov 26 11:38:03 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9068b7ce21c793695b523858c75499087ef2fc1ca0b72b980aa3ae5bef26041/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9068b7ce21c793695b523858c75499087ef2fc1ca0b72b980aa3ae5bef26041/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9068b7ce21c793695b523858c75499087ef2fc1ca0b72b980aa3ae5bef26041/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:03 compute-0 podman[75198]: 2025-11-26 11:38:03.081960665 +0000 UTC m=+0.087943074 container init c6f3518e1a3c60f255fe025352c8fe58335383f6b8ecaabf92128a01a835a557 (image=quay.io/ceph/ceph:v18, name=silly_moser, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 11:38:03 compute-0 podman[75198]: 2025-11-26 11:38:03.086799953 +0000 UTC m=+0.092782352 container start c6f3518e1a3c60f255fe025352c8fe58335383f6b8ecaabf92128a01a835a557 (image=quay.io/ceph/ceph:v18, name=silly_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 11:38:03 compute-0 podman[75198]: 2025-11-26 11:38:03.088978573 +0000 UTC m=+0.094960982 container attach c6f3518e1a3c60f255fe025352c8fe58335383f6b8ecaabf92128a01a835a557 (image=quay.io/ceph/ceph:v18, name=silly_moser, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 26 11:38:03 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'alerts'
Nov 26 11:38:03 compute-0 podman[75198]: 2025-11-26 11:38:03.010896333 +0000 UTC m=+0.016878752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:03 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:03.372+0000 7f41bcb38140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 26 11:38:03 compute-0 ceph-mgr[75197]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 26 11:38:03 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'balancer'
Nov 26 11:38:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 11:38:03 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3279887407' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 11:38:03 compute-0 silly_moser[75236]: 
Nov 26 11:38:03 compute-0 silly_moser[75236]: {
Nov 26 11:38:03 compute-0 silly_moser[75236]:     "fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:38:03 compute-0 silly_moser[75236]:     "health": {
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "status": "HEALTH_OK",
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "checks": {},
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "mutes": []
Nov 26 11:38:03 compute-0 silly_moser[75236]:     },
Nov 26 11:38:03 compute-0 silly_moser[75236]:     "election_epoch": 5,
Nov 26 11:38:03 compute-0 silly_moser[75236]:     "quorum": [
Nov 26 11:38:03 compute-0 silly_moser[75236]:         0
Nov 26 11:38:03 compute-0 silly_moser[75236]:     ],
Nov 26 11:38:03 compute-0 silly_moser[75236]:     "quorum_names": [
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "compute-0"
Nov 26 11:38:03 compute-0 silly_moser[75236]:     ],
Nov 26 11:38:03 compute-0 silly_moser[75236]:     "quorum_age": 1,
Nov 26 11:38:03 compute-0 silly_moser[75236]:     "monmap": {
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "epoch": 1,
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "min_mon_release_name": "reef",
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "num_mons": 1
Nov 26 11:38:03 compute-0 silly_moser[75236]:     },
Nov 26 11:38:03 compute-0 silly_moser[75236]:     "osdmap": {
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "epoch": 1,
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "num_osds": 0,
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "num_up_osds": 0,
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "osd_up_since": 0,
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "num_in_osds": 0,
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "osd_in_since": 0,
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "num_remapped_pgs": 0
Nov 26 11:38:03 compute-0 silly_moser[75236]:     },
Nov 26 11:38:03 compute-0 silly_moser[75236]:     "pgmap": {
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "pgs_by_state": [],
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "num_pgs": 0,
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "num_pools": 0,
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "num_objects": 0,
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "data_bytes": 0,
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "bytes_used": 0,
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "bytes_avail": 0,
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "bytes_total": 0
Nov 26 11:38:03 compute-0 silly_moser[75236]:     },
Nov 26 11:38:03 compute-0 silly_moser[75236]:     "fsmap": {
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "epoch": 1,
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "by_rank": [],
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "up:standby": 0
Nov 26 11:38:03 compute-0 silly_moser[75236]:     },
Nov 26 11:38:03 compute-0 silly_moser[75236]:     "mgrmap": {
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "available": false,
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "num_standbys": 0,
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "modules": [
Nov 26 11:38:03 compute-0 silly_moser[75236]:             "iostat",
Nov 26 11:38:03 compute-0 silly_moser[75236]:             "nfs",
Nov 26 11:38:03 compute-0 silly_moser[75236]:             "restful"
Nov 26 11:38:03 compute-0 silly_moser[75236]:         ],
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "services": {}
Nov 26 11:38:03 compute-0 silly_moser[75236]:     },
Nov 26 11:38:03 compute-0 silly_moser[75236]:     "servicemap": {
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "epoch": 1,
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "modified": "2025-11-26T11:37:59.453002+0000",
Nov 26 11:38:03 compute-0 silly_moser[75236]:         "services": {}
Nov 26 11:38:03 compute-0 silly_moser[75236]:     },
Nov 26 11:38:03 compute-0 silly_moser[75236]:     "progress_events": {}
Nov 26 11:38:03 compute-0 silly_moser[75236]: }
Nov 26 11:38:03 compute-0 systemd[1]: libpod-c6f3518e1a3c60f255fe025352c8fe58335383f6b8ecaabf92128a01a835a557.scope: Deactivated successfully.
Nov 26 11:38:03 compute-0 podman[75198]: 2025-11-26 11:38:03.411526202 +0000 UTC m=+0.417508601 container died c6f3518e1a3c60f255fe025352c8fe58335383f6b8ecaabf92128a01a835a557 (image=quay.io/ceph/ceph:v18, name=silly_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 11:38:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9068b7ce21c793695b523858c75499087ef2fc1ca0b72b980aa3ae5bef26041-merged.mount: Deactivated successfully.
Nov 26 11:38:03 compute-0 podman[75198]: 2025-11-26 11:38:03.436056345 +0000 UTC m=+0.442038744 container remove c6f3518e1a3c60f255fe025352c8fe58335383f6b8ecaabf92128a01a835a557 (image=quay.io/ceph/ceph:v18, name=silly_moser, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 26 11:38:03 compute-0 systemd[1]: libpod-conmon-c6f3518e1a3c60f255fe025352c8fe58335383f6b8ecaabf92128a01a835a557.scope: Deactivated successfully.
Nov 26 11:38:03 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3279887407' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 11:38:03 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:03.598+0000 7f41bcb38140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 26 11:38:03 compute-0 ceph-mgr[75197]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 26 11:38:03 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'cephadm'
Nov 26 11:38:04 compute-0 chronyd[58536]: Selected source 23.186.168.130 (pool.ntp.org)
Nov 26 11:38:05 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'crash'
Nov 26 11:38:05 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:05.464+0000 7f41bcb38140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 26 11:38:05 compute-0 ceph-mgr[75197]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 26 11:38:05 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'dashboard'
Nov 26 11:38:05 compute-0 podman[75283]: 2025-11-26 11:38:05.480237765 +0000 UTC m=+0.027541279 container create 00f86555f4a645093aa26aee33d56964321d2f95f025c0d450a350b45cd7450a (image=quay.io/ceph/ceph:v18, name=inspiring_keldysh, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:38:05 compute-0 systemd[1]: Started libpod-conmon-00f86555f4a645093aa26aee33d56964321d2f95f025c0d450a350b45cd7450a.scope.
Nov 26 11:38:05 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f77ae032773520cbb7361c2e2d16a4f74b6f4e9e0f2a044f474663c564c5f6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f77ae032773520cbb7361c2e2d16a4f74b6f4e9e0f2a044f474663c564c5f6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f77ae032773520cbb7361c2e2d16a4f74b6f4e9e0f2a044f474663c564c5f6a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:05 compute-0 podman[75283]: 2025-11-26 11:38:05.541061863 +0000 UTC m=+0.088365378 container init 00f86555f4a645093aa26aee33d56964321d2f95f025c0d450a350b45cd7450a (image=quay.io/ceph/ceph:v18, name=inspiring_keldysh, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 11:38:05 compute-0 podman[75283]: 2025-11-26 11:38:05.54608402 +0000 UTC m=+0.093387544 container start 00f86555f4a645093aa26aee33d56964321d2f95f025c0d450a350b45cd7450a (image=quay.io/ceph/ceph:v18, name=inspiring_keldysh, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:38:05 compute-0 podman[75283]: 2025-11-26 11:38:05.54720447 +0000 UTC m=+0.094507984 container attach 00f86555f4a645093aa26aee33d56964321d2f95f025c0d450a350b45cd7450a (image=quay.io/ceph/ceph:v18, name=inspiring_keldysh, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:38:05 compute-0 podman[75283]: 2025-11-26 11:38:05.468820147 +0000 UTC m=+0.016123681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:05 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 11:38:05 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/797440059' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]: 
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]: {
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:     "fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:     "health": {
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "status": "HEALTH_OK",
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "checks": {},
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "mutes": []
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:     },
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:     "election_epoch": 5,
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:     "quorum": [
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         0
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:     ],
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:     "quorum_names": [
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "compute-0"
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:     ],
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:     "quorum_age": 4,
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:     "monmap": {
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "epoch": 1,
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "min_mon_release_name": "reef",
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "num_mons": 1
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:     },
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:     "osdmap": {
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "epoch": 1,
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "num_osds": 0,
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "num_up_osds": 0,
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "osd_up_since": 0,
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "num_in_osds": 0,
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "osd_in_since": 0,
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "num_remapped_pgs": 0
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:     },
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:     "pgmap": {
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "pgs_by_state": [],
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "num_pgs": 0,
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "num_pools": 0,
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "num_objects": 0,
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "data_bytes": 0,
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "bytes_used": 0,
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "bytes_avail": 0,
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "bytes_total": 0
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:     },
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:     "fsmap": {
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "epoch": 1,
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "by_rank": [],
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "up:standby": 0
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:     },
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:     "mgrmap": {
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "available": false,
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "num_standbys": 0,
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "modules": [
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:             "iostat",
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:             "nfs",
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:             "restful"
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         ],
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "services": {}
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:     },
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:     "servicemap": {
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "epoch": 1,
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "modified": "2025-11-26T11:37:59.453002+0000",
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:         "services": {}
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:     },
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]:     "progress_events": {}
Nov 26 11:38:05 compute-0 inspiring_keldysh[75296]: }
Nov 26 11:38:05 compute-0 systemd[1]: libpod-00f86555f4a645093aa26aee33d56964321d2f95f025c0d450a350b45cd7450a.scope: Deactivated successfully.
Nov 26 11:38:05 compute-0 podman[75283]: 2025-11-26 11:38:05.86799871 +0000 UTC m=+0.415302224 container died 00f86555f4a645093aa26aee33d56964321d2f95f025c0d450a350b45cd7450a (image=quay.io/ceph/ceph:v18, name=inspiring_keldysh, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 11:38:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f77ae032773520cbb7361c2e2d16a4f74b6f4e9e0f2a044f474663c564c5f6a-merged.mount: Deactivated successfully.
Nov 26 11:38:05 compute-0 podman[75283]: 2025-11-26 11:38:05.889440835 +0000 UTC m=+0.436744349 container remove 00f86555f4a645093aa26aee33d56964321d2f95f025c0d450a350b45cd7450a (image=quay.io/ceph/ceph:v18, name=inspiring_keldysh, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:38:05 compute-0 systemd[1]: libpod-conmon-00f86555f4a645093aa26aee33d56964321d2f95f025c0d450a350b45cd7450a.scope: Deactivated successfully.
Nov 26 11:38:05 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/797440059' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 11:38:06 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'devicehealth'
Nov 26 11:38:06 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:06.896+0000 7f41bcb38140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 26 11:38:06 compute-0 ceph-mgr[75197]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 26 11:38:06 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'diskprediction_local'
Nov 26 11:38:07 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 26 11:38:07 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 26 11:38:07 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]:   from numpy import show_config as show_numpy_config
Nov 26 11:38:07 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:07.353+0000 7f41bcb38140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 26 11:38:07 compute-0 ceph-mgr[75197]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 26 11:38:07 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'influx'
Nov 26 11:38:07 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:07.561+0000 7f41bcb38140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 26 11:38:07 compute-0 ceph-mgr[75197]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 26 11:38:07 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'insights'
Nov 26 11:38:07 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'iostat'
Nov 26 11:38:07 compute-0 podman[75331]: 2025-11-26 11:38:07.9314332 +0000 UTC m=+0.026519155 container create ff5b581120a3b5960e4d0ff45f76033435d76bf38e4468862a833d3a58bc7177 (image=quay.io/ceph/ceph:v18, name=suspicious_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 11:38:07 compute-0 systemd[1]: Started libpod-conmon-ff5b581120a3b5960e4d0ff45f76033435d76bf38e4468862a833d3a58bc7177.scope.
Nov 26 11:38:07 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:07.975+0000 7f41bcb38140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 26 11:38:07 compute-0 ceph-mgr[75197]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 26 11:38:07 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'k8sevents'
Nov 26 11:38:07 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/213d7f028aaf0f6e2aab97d9c9e28137b8739bc2bb8581d79c2d9d3276349466/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/213d7f028aaf0f6e2aab97d9c9e28137b8739bc2bb8581d79c2d9d3276349466/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/213d7f028aaf0f6e2aab97d9c9e28137b8739bc2bb8581d79c2d9d3276349466/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:07 compute-0 podman[75331]: 2025-11-26 11:38:07.991212877 +0000 UTC m=+0.086298842 container init ff5b581120a3b5960e4d0ff45f76033435d76bf38e4468862a833d3a58bc7177 (image=quay.io/ceph/ceph:v18, name=suspicious_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 11:38:07 compute-0 podman[75331]: 2025-11-26 11:38:07.99590605 +0000 UTC m=+0.090991994 container start ff5b581120a3b5960e4d0ff45f76033435d76bf38e4468862a833d3a58bc7177 (image=quay.io/ceph/ceph:v18, name=suspicious_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 11:38:07 compute-0 podman[75331]: 2025-11-26 11:38:07.997069575 +0000 UTC m=+0.092155530 container attach ff5b581120a3b5960e4d0ff45f76033435d76bf38e4468862a833d3a58bc7177 (image=quay.io/ceph/ceph:v18, name=suspicious_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:38:08 compute-0 podman[75331]: 2025-11-26 11:38:07.920264311 +0000 UTC m=+0.015350277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:08 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 11:38:08 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3518297120' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]: 
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]: {
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:     "fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:     "health": {
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "status": "HEALTH_OK",
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "checks": {},
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "mutes": []
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:     },
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:     "election_epoch": 5,
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:     "quorum": [
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         0
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:     ],
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:     "quorum_names": [
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "compute-0"
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:     ],
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:     "quorum_age": 6,
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:     "monmap": {
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "epoch": 1,
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "min_mon_release_name": "reef",
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "num_mons": 1
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:     },
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:     "osdmap": {
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "epoch": 1,
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "num_osds": 0,
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "num_up_osds": 0,
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "osd_up_since": 0,
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "num_in_osds": 0,
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "osd_in_since": 0,
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "num_remapped_pgs": 0
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:     },
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:     "pgmap": {
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "pgs_by_state": [],
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "num_pgs": 0,
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "num_pools": 0,
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "num_objects": 0,
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "data_bytes": 0,
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "bytes_used": 0,
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "bytes_avail": 0,
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "bytes_total": 0
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:     },
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:     "fsmap": {
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "epoch": 1,
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "by_rank": [],
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "up:standby": 0
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:     },
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:     "mgrmap": {
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "available": false,
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "num_standbys": 0,
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "modules": [
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:             "iostat",
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:             "nfs",
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:             "restful"
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         ],
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "services": {}
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:     },
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:     "servicemap": {
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "epoch": 1,
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "modified": "2025-11-26T11:37:59.453002+0000",
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:         "services": {}
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:     },
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]:     "progress_events": {}
Nov 26 11:38:08 compute-0 suspicious_albattani[75344]: }
Nov 26 11:38:08 compute-0 systemd[1]: libpod-ff5b581120a3b5960e4d0ff45f76033435d76bf38e4468862a833d3a58bc7177.scope: Deactivated successfully.
Nov 26 11:38:08 compute-0 podman[75370]: 2025-11-26 11:38:08.344706615 +0000 UTC m=+0.015855639 container died ff5b581120a3b5960e4d0ff45f76033435d76bf38e4468862a833d3a58bc7177 (image=quay.io/ceph/ceph:v18, name=suspicious_albattani, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 11:38:08 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3518297120' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 11:38:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-213d7f028aaf0f6e2aab97d9c9e28137b8739bc2bb8581d79c2d9d3276349466-merged.mount: Deactivated successfully.
Nov 26 11:38:08 compute-0 podman[75370]: 2025-11-26 11:38:08.366987976 +0000 UTC m=+0.038136981 container remove ff5b581120a3b5960e4d0ff45f76033435d76bf38e4468862a833d3a58bc7177 (image=quay.io/ceph/ceph:v18, name=suspicious_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 11:38:08 compute-0 systemd[1]: libpod-conmon-ff5b581120a3b5960e4d0ff45f76033435d76bf38e4468862a833d3a58bc7177.scope: Deactivated successfully.
Nov 26 11:38:09 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'localpool'
Nov 26 11:38:09 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'mds_autoscaler'
Nov 26 11:38:10 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'mirroring'
Nov 26 11:38:10 compute-0 podman[75382]: 2025-11-26 11:38:10.413480452 +0000 UTC m=+0.027710273 container create 23cd2be48e789472d1d3994213b7859afb792e1837668b24881086bad5d17566 (image=quay.io/ceph/ceph:v18, name=priceless_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:38:10 compute-0 systemd[1]: Started libpod-conmon-23cd2be48e789472d1d3994213b7859afb792e1837668b24881086bad5d17566.scope.
Nov 26 11:38:10 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbbd72d47377b553e0b23428ac0826acae847eaf7f3c06deacbc0ddf74a7f025/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbbd72d47377b553e0b23428ac0826acae847eaf7f3c06deacbc0ddf74a7f025/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbbd72d47377b553e0b23428ac0826acae847eaf7f3c06deacbc0ddf74a7f025/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:10 compute-0 podman[75382]: 2025-11-26 11:38:10.470000269 +0000 UTC m=+0.084230090 container init 23cd2be48e789472d1d3994213b7859afb792e1837668b24881086bad5d17566 (image=quay.io/ceph/ceph:v18, name=priceless_cray, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:38:10 compute-0 podman[75382]: 2025-11-26 11:38:10.473898292 +0000 UTC m=+0.088128113 container start 23cd2be48e789472d1d3994213b7859afb792e1837668b24881086bad5d17566 (image=quay.io/ceph/ceph:v18, name=priceless_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:38:10 compute-0 podman[75382]: 2025-11-26 11:38:10.47486183 +0000 UTC m=+0.089091651 container attach 23cd2be48e789472d1d3994213b7859afb792e1837668b24881086bad5d17566 (image=quay.io/ceph/ceph:v18, name=priceless_cray, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:38:10 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'nfs'
Nov 26 11:38:10 compute-0 podman[75382]: 2025-11-26 11:38:10.401698306 +0000 UTC m=+0.015928147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:10 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 11:38:10 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1178494291' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 11:38:10 compute-0 priceless_cray[75395]: 
Nov 26 11:38:10 compute-0 priceless_cray[75395]: {
Nov 26 11:38:10 compute-0 priceless_cray[75395]:     "fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:38:10 compute-0 priceless_cray[75395]:     "health": {
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "status": "HEALTH_OK",
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "checks": {},
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "mutes": []
Nov 26 11:38:10 compute-0 priceless_cray[75395]:     },
Nov 26 11:38:10 compute-0 priceless_cray[75395]:     "election_epoch": 5,
Nov 26 11:38:10 compute-0 priceless_cray[75395]:     "quorum": [
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         0
Nov 26 11:38:10 compute-0 priceless_cray[75395]:     ],
Nov 26 11:38:10 compute-0 priceless_cray[75395]:     "quorum_names": [
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "compute-0"
Nov 26 11:38:10 compute-0 priceless_cray[75395]:     ],
Nov 26 11:38:10 compute-0 priceless_cray[75395]:     "quorum_age": 9,
Nov 26 11:38:10 compute-0 priceless_cray[75395]:     "monmap": {
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "epoch": 1,
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "min_mon_release_name": "reef",
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "num_mons": 1
Nov 26 11:38:10 compute-0 priceless_cray[75395]:     },
Nov 26 11:38:10 compute-0 priceless_cray[75395]:     "osdmap": {
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "epoch": 1,
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "num_osds": 0,
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "num_up_osds": 0,
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "osd_up_since": 0,
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "num_in_osds": 0,
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "osd_in_since": 0,
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "num_remapped_pgs": 0
Nov 26 11:38:10 compute-0 priceless_cray[75395]:     },
Nov 26 11:38:10 compute-0 priceless_cray[75395]:     "pgmap": {
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "pgs_by_state": [],
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "num_pgs": 0,
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "num_pools": 0,
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "num_objects": 0,
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "data_bytes": 0,
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "bytes_used": 0,
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "bytes_avail": 0,
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "bytes_total": 0
Nov 26 11:38:10 compute-0 priceless_cray[75395]:     },
Nov 26 11:38:10 compute-0 priceless_cray[75395]:     "fsmap": {
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "epoch": 1,
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "by_rank": [],
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "up:standby": 0
Nov 26 11:38:10 compute-0 priceless_cray[75395]:     },
Nov 26 11:38:10 compute-0 priceless_cray[75395]:     "mgrmap": {
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "available": false,
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "num_standbys": 0,
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "modules": [
Nov 26 11:38:10 compute-0 priceless_cray[75395]:             "iostat",
Nov 26 11:38:10 compute-0 priceless_cray[75395]:             "nfs",
Nov 26 11:38:10 compute-0 priceless_cray[75395]:             "restful"
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         ],
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "services": {}
Nov 26 11:38:10 compute-0 priceless_cray[75395]:     },
Nov 26 11:38:10 compute-0 priceless_cray[75395]:     "servicemap": {
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "epoch": 1,
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "modified": "2025-11-26T11:37:59.453002+0000",
Nov 26 11:38:10 compute-0 priceless_cray[75395]:         "services": {}
Nov 26 11:38:10 compute-0 priceless_cray[75395]:     },
Nov 26 11:38:10 compute-0 priceless_cray[75395]:     "progress_events": {}
Nov 26 11:38:10 compute-0 priceless_cray[75395]: }
Nov 26 11:38:10 compute-0 systemd[1]: libpod-23cd2be48e789472d1d3994213b7859afb792e1837668b24881086bad5d17566.scope: Deactivated successfully.
Nov 26 11:38:10 compute-0 conmon[75395]: conmon 23cd2be48e789472d1d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-23cd2be48e789472d1d3994213b7859afb792e1837668b24881086bad5d17566.scope/container/memory.events
Nov 26 11:38:10 compute-0 podman[75382]: 2025-11-26 11:38:10.796526266 +0000 UTC m=+0.410756088 container died 23cd2be48e789472d1d3994213b7859afb792e1837668b24881086bad5d17566 (image=quay.io/ceph/ceph:v18, name=priceless_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 11:38:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbbd72d47377b553e0b23428ac0826acae847eaf7f3c06deacbc0ddf74a7f025-merged.mount: Deactivated successfully.
Nov 26 11:38:10 compute-0 podman[75382]: 2025-11-26 11:38:10.817357932 +0000 UTC m=+0.431587754 container remove 23cd2be48e789472d1d3994213b7859afb792e1837668b24881086bad5d17566 (image=quay.io/ceph/ceph:v18, name=priceless_cray, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 11:38:10 compute-0 systemd[1]: libpod-conmon-23cd2be48e789472d1d3994213b7859afb792e1837668b24881086bad5d17566.scope: Deactivated successfully.
Nov 26 11:38:10 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1178494291' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 11:38:11 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:11.072+0000 7f41bcb38140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 26 11:38:11 compute-0 ceph-mgr[75197]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 26 11:38:11 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'orchestrator'
Nov 26 11:38:11 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:11.640+0000 7f41bcb38140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 26 11:38:11 compute-0 ceph-mgr[75197]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 26 11:38:11 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'osd_perf_query'
Nov 26 11:38:11 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:11.876+0000 7f41bcb38140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 26 11:38:11 compute-0 ceph-mgr[75197]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 26 11:38:11 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'osd_support'
Nov 26 11:38:12 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:12.080+0000 7f41bcb38140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 26 11:38:12 compute-0 ceph-mgr[75197]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 26 11:38:12 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'pg_autoscaler'
Nov 26 11:38:12 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:12.318+0000 7f41bcb38140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 26 11:38:12 compute-0 ceph-mgr[75197]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 26 11:38:12 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'progress'
Nov 26 11:38:12 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:12.527+0000 7f41bcb38140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 26 11:38:12 compute-0 ceph-mgr[75197]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 26 11:38:12 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'prometheus'
Nov 26 11:38:12 compute-0 podman[75430]: 2025-11-26 11:38:12.858577071 +0000 UTC m=+0.024736763 container create b53a8be908f07215d80089c8cdb9e0e9f86b2538195f9c90e00d438a848aaefa (image=quay.io/ceph/ceph:v18, name=busy_jang, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 11:38:12 compute-0 systemd[1]: Started libpod-conmon-b53a8be908f07215d80089c8cdb9e0e9f86b2538195f9c90e00d438a848aaefa.scope.
Nov 26 11:38:12 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cca9c0bb319c4b9106a6351239ed9fa79bd2385009ca23b62f31686a0e73d92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cca9c0bb319c4b9106a6351239ed9fa79bd2385009ca23b62f31686a0e73d92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cca9c0bb319c4b9106a6351239ed9fa79bd2385009ca23b62f31686a0e73d92/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:12 compute-0 podman[75430]: 2025-11-26 11:38:12.902119117 +0000 UTC m=+0.068278819 container init b53a8be908f07215d80089c8cdb9e0e9f86b2538195f9c90e00d438a848aaefa (image=quay.io/ceph/ceph:v18, name=busy_jang, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 11:38:12 compute-0 podman[75430]: 2025-11-26 11:38:12.906424479 +0000 UTC m=+0.072584171 container start b53a8be908f07215d80089c8cdb9e0e9f86b2538195f9c90e00d438a848aaefa (image=quay.io/ceph/ceph:v18, name=busy_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 11:38:12 compute-0 podman[75430]: 2025-11-26 11:38:12.909657198 +0000 UTC m=+0.075816900 container attach b53a8be908f07215d80089c8cdb9e0e9f86b2538195f9c90e00d438a848aaefa (image=quay.io/ceph/ceph:v18, name=busy_jang, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 26 11:38:12 compute-0 podman[75430]: 2025-11-26 11:38:12.849101336 +0000 UTC m=+0.015261048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 11:38:13 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/815257502' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 11:38:13 compute-0 busy_jang[75444]: 
Nov 26 11:38:13 compute-0 busy_jang[75444]: {
Nov 26 11:38:13 compute-0 busy_jang[75444]:     "fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:38:13 compute-0 busy_jang[75444]:     "health": {
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "status": "HEALTH_OK",
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "checks": {},
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "mutes": []
Nov 26 11:38:13 compute-0 busy_jang[75444]:     },
Nov 26 11:38:13 compute-0 busy_jang[75444]:     "election_epoch": 5,
Nov 26 11:38:13 compute-0 busy_jang[75444]:     "quorum": [
Nov 26 11:38:13 compute-0 busy_jang[75444]:         0
Nov 26 11:38:13 compute-0 busy_jang[75444]:     ],
Nov 26 11:38:13 compute-0 busy_jang[75444]:     "quorum_names": [
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "compute-0"
Nov 26 11:38:13 compute-0 busy_jang[75444]:     ],
Nov 26 11:38:13 compute-0 busy_jang[75444]:     "quorum_age": 11,
Nov 26 11:38:13 compute-0 busy_jang[75444]:     "monmap": {
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "epoch": 1,
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "min_mon_release_name": "reef",
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "num_mons": 1
Nov 26 11:38:13 compute-0 busy_jang[75444]:     },
Nov 26 11:38:13 compute-0 busy_jang[75444]:     "osdmap": {
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "epoch": 1,
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "num_osds": 0,
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "num_up_osds": 0,
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "osd_up_since": 0,
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "num_in_osds": 0,
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "osd_in_since": 0,
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "num_remapped_pgs": 0
Nov 26 11:38:13 compute-0 busy_jang[75444]:     },
Nov 26 11:38:13 compute-0 busy_jang[75444]:     "pgmap": {
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "pgs_by_state": [],
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "num_pgs": 0,
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "num_pools": 0,
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "num_objects": 0,
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "data_bytes": 0,
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "bytes_used": 0,
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "bytes_avail": 0,
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "bytes_total": 0
Nov 26 11:38:13 compute-0 busy_jang[75444]:     },
Nov 26 11:38:13 compute-0 busy_jang[75444]:     "fsmap": {
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "epoch": 1,
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "by_rank": [],
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "up:standby": 0
Nov 26 11:38:13 compute-0 busy_jang[75444]:     },
Nov 26 11:38:13 compute-0 busy_jang[75444]:     "mgrmap": {
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "available": false,
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "num_standbys": 0,
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "modules": [
Nov 26 11:38:13 compute-0 busy_jang[75444]:             "iostat",
Nov 26 11:38:13 compute-0 busy_jang[75444]:             "nfs",
Nov 26 11:38:13 compute-0 busy_jang[75444]:             "restful"
Nov 26 11:38:13 compute-0 busy_jang[75444]:         ],
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "services": {}
Nov 26 11:38:13 compute-0 busy_jang[75444]:     },
Nov 26 11:38:13 compute-0 busy_jang[75444]:     "servicemap": {
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "epoch": 1,
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "modified": "2025-11-26T11:37:59.453002+0000",
Nov 26 11:38:13 compute-0 busy_jang[75444]:         "services": {}
Nov 26 11:38:13 compute-0 busy_jang[75444]:     },
Nov 26 11:38:13 compute-0 busy_jang[75444]:     "progress_events": {}
Nov 26 11:38:13 compute-0 busy_jang[75444]: }
Nov 26 11:38:13 compute-0 systemd[1]: libpod-b53a8be908f07215d80089c8cdb9e0e9f86b2538195f9c90e00d438a848aaefa.scope: Deactivated successfully.
Nov 26 11:38:13 compute-0 podman[75430]: 2025-11-26 11:38:13.227690494 +0000 UTC m=+0.393850206 container died b53a8be908f07215d80089c8cdb9e0e9f86b2538195f9c90e00d438a848aaefa (image=quay.io/ceph/ceph:v18, name=busy_jang, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:38:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cca9c0bb319c4b9106a6351239ed9fa79bd2385009ca23b62f31686a0e73d92-merged.mount: Deactivated successfully.
Nov 26 11:38:13 compute-0 podman[75430]: 2025-11-26 11:38:13.248771041 +0000 UTC m=+0.414930732 container remove b53a8be908f07215d80089c8cdb9e0e9f86b2538195f9c90e00d438a848aaefa (image=quay.io/ceph/ceph:v18, name=busy_jang, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Nov 26 11:38:13 compute-0 systemd[1]: libpod-conmon-b53a8be908f07215d80089c8cdb9e0e9f86b2538195f9c90e00d438a848aaefa.scope: Deactivated successfully.
Nov 26 11:38:13 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/815257502' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 11:38:13 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:13.393+0000 7f41bcb38140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 26 11:38:13 compute-0 ceph-mgr[75197]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 26 11:38:13 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'rbd_support'
Nov 26 11:38:13 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:13.649+0000 7f41bcb38140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 26 11:38:13 compute-0 ceph-mgr[75197]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 26 11:38:13 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'restful'
Nov 26 11:38:14 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'rgw'
Nov 26 11:38:14 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:14.860+0000 7f41bcb38140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 26 11:38:14 compute-0 ceph-mgr[75197]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 26 11:38:14 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'rook'
Nov 26 11:38:15 compute-0 podman[75480]: 2025-11-26 11:38:15.290159878 +0000 UTC m=+0.025130455 container create e363a9b887068af0496ddcf0633788a34630ed20675dbf454980cf2a49c54dd1 (image=quay.io/ceph/ceph:v18, name=nervous_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:38:15 compute-0 systemd[1]: Started libpod-conmon-e363a9b887068af0496ddcf0633788a34630ed20675dbf454980cf2a49c54dd1.scope.
Nov 26 11:38:15 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50aa45bb7c54f34b260f43c55aa7d8d1d8e14ffb5bcb11235ee1d03deaaca529/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50aa45bb7c54f34b260f43c55aa7d8d1d8e14ffb5bcb11235ee1d03deaaca529/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50aa45bb7c54f34b260f43c55aa7d8d1d8e14ffb5bcb11235ee1d03deaaca529/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:15 compute-0 podman[75480]: 2025-11-26 11:38:15.342947265 +0000 UTC m=+0.077917841 container init e363a9b887068af0496ddcf0633788a34630ed20675dbf454980cf2a49c54dd1 (image=quay.io/ceph/ceph:v18, name=nervous_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:38:15 compute-0 podman[75480]: 2025-11-26 11:38:15.346630683 +0000 UTC m=+0.081601260 container start e363a9b887068af0496ddcf0633788a34630ed20675dbf454980cf2a49c54dd1 (image=quay.io/ceph/ceph:v18, name=nervous_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 11:38:15 compute-0 podman[75480]: 2025-11-26 11:38:15.347771886 +0000 UTC m=+0.082742463 container attach e363a9b887068af0496ddcf0633788a34630ed20675dbf454980cf2a49c54dd1 (image=quay.io/ceph/ceph:v18, name=nervous_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 11:38:15 compute-0 podman[75480]: 2025-11-26 11:38:15.279798011 +0000 UTC m=+0.014768609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:15 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 11:38:15 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1128453674' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 11:38:15 compute-0 nervous_nobel[75493]: 
Nov 26 11:38:15 compute-0 nervous_nobel[75493]: {
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:     "fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:     "health": {
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "status": "HEALTH_OK",
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "checks": {},
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "mutes": []
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:     },
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:     "election_epoch": 5,
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:     "quorum": [
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         0
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:     ],
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:     "quorum_names": [
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "compute-0"
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:     ],
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:     "quorum_age": 14,
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:     "monmap": {
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "epoch": 1,
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "min_mon_release_name": "reef",
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "num_mons": 1
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:     },
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:     "osdmap": {
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "epoch": 1,
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "num_osds": 0,
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "num_up_osds": 0,
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "osd_up_since": 0,
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "num_in_osds": 0,
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "osd_in_since": 0,
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "num_remapped_pgs": 0
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:     },
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:     "pgmap": {
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "pgs_by_state": [],
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "num_pgs": 0,
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "num_pools": 0,
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "num_objects": 0,
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "data_bytes": 0,
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "bytes_used": 0,
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "bytes_avail": 0,
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "bytes_total": 0
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:     },
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:     "fsmap": {
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "epoch": 1,
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "by_rank": [],
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "up:standby": 0
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:     },
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:     "mgrmap": {
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "available": false,
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "num_standbys": 0,
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "modules": [
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:             "iostat",
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:             "nfs",
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:             "restful"
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         ],
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "services": {}
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:     },
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:     "servicemap": {
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "epoch": 1,
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "modified": "2025-11-26T11:37:59.453002+0000",
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:         "services": {}
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:     },
Nov 26 11:38:15 compute-0 nervous_nobel[75493]:     "progress_events": {}
Nov 26 11:38:15 compute-0 nervous_nobel[75493]: }
Nov 26 11:38:15 compute-0 systemd[1]: libpod-e363a9b887068af0496ddcf0633788a34630ed20675dbf454980cf2a49c54dd1.scope: Deactivated successfully.
Nov 26 11:38:15 compute-0 conmon[75493]: conmon e363a9b887068af0496d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e363a9b887068af0496ddcf0633788a34630ed20675dbf454980cf2a49c54dd1.scope/container/memory.events
Nov 26 11:38:15 compute-0 podman[75480]: 2025-11-26 11:38:15.66839029 +0000 UTC m=+0.403360866 container died e363a9b887068af0496ddcf0633788a34630ed20675dbf454980cf2a49c54dd1 (image=quay.io/ceph/ceph:v18, name=nervous_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 11:38:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-50aa45bb7c54f34b260f43c55aa7d8d1d8e14ffb5bcb11235ee1d03deaaca529-merged.mount: Deactivated successfully.
Nov 26 11:38:15 compute-0 podman[75480]: 2025-11-26 11:38:15.692586462 +0000 UTC m=+0.427557040 container remove e363a9b887068af0496ddcf0633788a34630ed20675dbf454980cf2a49c54dd1 (image=quay.io/ceph/ceph:v18, name=nervous_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 11:38:15 compute-0 systemd[1]: libpod-conmon-e363a9b887068af0496ddcf0633788a34630ed20675dbf454980cf2a49c54dd1.scope: Deactivated successfully.
Nov 26 11:38:15 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1128453674' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 11:38:16 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:16.628+0000 7f41bcb38140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 26 11:38:16 compute-0 ceph-mgr[75197]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 26 11:38:16 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'selftest'
Nov 26 11:38:16 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:16.839+0000 7f41bcb38140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 26 11:38:16 compute-0 ceph-mgr[75197]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 26 11:38:16 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'snap_schedule'
Nov 26 11:38:17 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:17.055+0000 7f41bcb38140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 26 11:38:17 compute-0 ceph-mgr[75197]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 26 11:38:17 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'stats'
Nov 26 11:38:17 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'status'
Nov 26 11:38:17 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:17.493+0000 7f41bcb38140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 26 11:38:17 compute-0 ceph-mgr[75197]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 26 11:38:17 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'telegraf'
Nov 26 11:38:17 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:17.699+0000 7f41bcb38140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 26 11:38:17 compute-0 ceph-mgr[75197]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 26 11:38:17 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'telemetry'
Nov 26 11:38:17 compute-0 podman[75530]: 2025-11-26 11:38:17.734538894 +0000 UTC m=+0.026088073 container create 99ee56a1eb5b694af74913fbcffb265a3ca27f7edb02c52091a344534894fac2 (image=quay.io/ceph/ceph:v18, name=exciting_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:38:17 compute-0 systemd[1]: Started libpod-conmon-99ee56a1eb5b694af74913fbcffb265a3ca27f7edb02c52091a344534894fac2.scope.
Nov 26 11:38:17 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a0297634f4ade1dc63962da76094d97a2c55d2919fa04af3555a6a45e0d9af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a0297634f4ade1dc63962da76094d97a2c55d2919fa04af3555a6a45e0d9af/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a0297634f4ade1dc63962da76094d97a2c55d2919fa04af3555a6a45e0d9af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:17 compute-0 podman[75530]: 2025-11-26 11:38:17.780400706 +0000 UTC m=+0.071949875 container init 99ee56a1eb5b694af74913fbcffb265a3ca27f7edb02c52091a344534894fac2 (image=quay.io/ceph/ceph:v18, name=exciting_goodall, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 11:38:17 compute-0 podman[75530]: 2025-11-26 11:38:17.78433594 +0000 UTC m=+0.075885108 container start 99ee56a1eb5b694af74913fbcffb265a3ca27f7edb02c52091a344534894fac2 (image=quay.io/ceph/ceph:v18, name=exciting_goodall, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:38:17 compute-0 podman[75530]: 2025-11-26 11:38:17.785355463 +0000 UTC m=+0.076904622 container attach 99ee56a1eb5b694af74913fbcffb265a3ca27f7edb02c52091a344534894fac2 (image=quay.io/ceph/ceph:v18, name=exciting_goodall, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:38:17 compute-0 podman[75530]: 2025-11-26 11:38:17.723666535 +0000 UTC m=+0.015215723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 11:38:18 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2571330682' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 11:38:18 compute-0 exciting_goodall[75544]: 
Nov 26 11:38:18 compute-0 exciting_goodall[75544]: {
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:     "fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:     "health": {
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "status": "HEALTH_OK",
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "checks": {},
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "mutes": []
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:     },
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:     "election_epoch": 5,
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:     "quorum": [
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         0
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:     ],
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:     "quorum_names": [
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "compute-0"
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:     ],
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:     "quorum_age": 16,
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:     "monmap": {
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "epoch": 1,
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "min_mon_release_name": "reef",
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "num_mons": 1
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:     },
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:     "osdmap": {
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "epoch": 1,
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "num_osds": 0,
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "num_up_osds": 0,
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "osd_up_since": 0,
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "num_in_osds": 0,
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "osd_in_since": 0,
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "num_remapped_pgs": 0
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:     },
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:     "pgmap": {
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "pgs_by_state": [],
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "num_pgs": 0,
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "num_pools": 0,
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "num_objects": 0,
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "data_bytes": 0,
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "bytes_used": 0,
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "bytes_avail": 0,
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "bytes_total": 0
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:     },
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:     "fsmap": {
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "epoch": 1,
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "by_rank": [],
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "up:standby": 0
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:     },
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:     "mgrmap": {
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "available": false,
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "num_standbys": 0,
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "modules": [
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:             "iostat",
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:             "nfs",
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:             "restful"
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         ],
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "services": {}
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:     },
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:     "servicemap": {
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "epoch": 1,
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "modified": "2025-11-26T11:37:59.453002+0000",
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:         "services": {}
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:     },
Nov 26 11:38:18 compute-0 exciting_goodall[75544]:     "progress_events": {}
Nov 26 11:38:18 compute-0 exciting_goodall[75544]: }
Nov 26 11:38:18 compute-0 systemd[1]: libpod-99ee56a1eb5b694af74913fbcffb265a3ca27f7edb02c52091a344534894fac2.scope: Deactivated successfully.
Nov 26 11:38:18 compute-0 podman[75530]: 2025-11-26 11:38:18.102809999 +0000 UTC m=+0.394359168 container died 99ee56a1eb5b694af74913fbcffb265a3ca27f7edb02c52091a344534894fac2 (image=quay.io/ceph/ceph:v18, name=exciting_goodall, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 11:38:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7a0297634f4ade1dc63962da76094d97a2c55d2919fa04af3555a6a45e0d9af-merged.mount: Deactivated successfully.
Nov 26 11:38:18 compute-0 podman[75530]: 2025-11-26 11:38:18.133913542 +0000 UTC m=+0.425462711 container remove 99ee56a1eb5b694af74913fbcffb265a3ca27f7edb02c52091a344534894fac2 (image=quay.io/ceph/ceph:v18, name=exciting_goodall, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 11:38:18 compute-0 systemd[1]: libpod-conmon-99ee56a1eb5b694af74913fbcffb265a3ca27f7edb02c52091a344534894fac2.scope: Deactivated successfully.
Nov 26 11:38:18 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2571330682' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 11:38:18 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:18.224+0000 7f41bcb38140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 26 11:38:18 compute-0 ceph-mgr[75197]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 26 11:38:18 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'test_orchestrator'
Nov 26 11:38:18 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:18.802+0000 7f41bcb38140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 26 11:38:18 compute-0 ceph-mgr[75197]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 26 11:38:18 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'volumes'
Nov 26 11:38:19 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:19.410+0000 7f41bcb38140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'zabbix'
Nov 26 11:38:19 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:19.615+0000 7f41bcb38140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: ms_deliver_dispatch: unhandled message 0x561571a2f1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 26 11:38:19 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.mwrktr
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: mgr handle_mgr_map Activating!
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: mgr handle_mgr_map I am now activating
Nov 26 11:38:19 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.mwrktr(active, starting, since 0.00509874s)
Nov 26 11:38:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 26 11:38:19 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3349817317' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 26 11:38:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).mds e1 all = 1
Nov 26 11:38:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 26 11:38:19 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3349817317' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 26 11:38:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 26 11:38:19 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3349817317' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 26 11:38:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 26 11:38:19 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3349817317' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 26 11:38:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.mwrktr", "id": "compute-0.mwrktr"} v 0) v1
Nov 26 11:38:19 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3349817317' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mwrktr", "id": "compute-0.mwrktr"}]: dispatch
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: balancer
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [balancer INFO root] Starting
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: crash
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [balancer INFO root] Optimize plan auto_2025-11-26_11:38:19
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [balancer INFO root] do_upmap
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [balancer INFO root] No pools available
Nov 26 11:38:19 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : Manager daemon compute-0.mwrktr is now available
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: devicehealth
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [devicehealth INFO root] Starting
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: iostat
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: nfs
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: orchestrator
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: pg_autoscaler
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: progress
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [progress INFO root] Loading...
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [progress INFO root] No stored events to load
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [progress INFO root] Loaded [] historic events
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [progress INFO root] Loaded OSDMap, ready.
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [rbd_support INFO root] recovery thread starting
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [rbd_support INFO root] starting setup
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: rbd_support
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: restful
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [restful INFO root] server_addr: :: server_port: 8003
Nov 26 11:38:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mwrktr/mirror_snapshot_schedule"} v 0) v1
Nov 26 11:38:19 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3349817317' entity='mgr.compute-0.mwrktr' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mwrktr/mirror_snapshot_schedule"}]: dispatch
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: status
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [restful WARNING root] server not running: no certificate configured
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: telemetry
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 26 11:38:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [rbd_support INFO root] PerfHandler: starting
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TaskHandler: starting
Nov 26 11:38:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mwrktr/trash_purge_schedule"} v 0) v1
Nov 26 11:38:19 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3349817317' entity='mgr.compute-0.mwrktr' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mwrktr/trash_purge_schedule"}]: dispatch
Nov 26 11:38:19 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3349817317' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: [rbd_support INFO root] setup complete
Nov 26 11:38:19 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3349817317' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Nov 26 11:38:19 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3349817317' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:19 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: volumes
Nov 26 11:38:19 compute-0 ceph-mon[74928]: Activating manager daemon compute-0.mwrktr
Nov 26 11:38:19 compute-0 ceph-mon[74928]: mgrmap e2: compute-0.mwrktr(active, starting, since 0.00509874s)
Nov 26 11:38:19 compute-0 ceph-mon[74928]: from='mgr.14102 192.168.122.100:0/3349817317' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 26 11:38:19 compute-0 ceph-mon[74928]: from='mgr.14102 192.168.122.100:0/3349817317' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 26 11:38:19 compute-0 ceph-mon[74928]: from='mgr.14102 192.168.122.100:0/3349817317' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 26 11:38:19 compute-0 ceph-mon[74928]: from='mgr.14102 192.168.122.100:0/3349817317' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 26 11:38:19 compute-0 ceph-mon[74928]: from='mgr.14102 192.168.122.100:0/3349817317' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mwrktr", "id": "compute-0.mwrktr"}]: dispatch
Nov 26 11:38:19 compute-0 ceph-mon[74928]: Manager daemon compute-0.mwrktr is now available
Nov 26 11:38:19 compute-0 ceph-mon[74928]: from='mgr.14102 192.168.122.100:0/3349817317' entity='mgr.compute-0.mwrktr' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mwrktr/mirror_snapshot_schedule"}]: dispatch
Nov 26 11:38:19 compute-0 ceph-mon[74928]: from='mgr.14102 192.168.122.100:0/3349817317' entity='mgr.compute-0.mwrktr' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mwrktr/trash_purge_schedule"}]: dispatch
Nov 26 11:38:19 compute-0 ceph-mon[74928]: from='mgr.14102 192.168.122.100:0/3349817317' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:19 compute-0 ceph-mon[74928]: from='mgr.14102 192.168.122.100:0/3349817317' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:19 compute-0 ceph-mon[74928]: from='mgr.14102 192.168.122.100:0/3349817317' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:20 compute-0 podman[75658]: 2025-11-26 11:38:20.183730067 +0000 UTC m=+0.033403743 container create f56790e8a06707e3ea7ef23966e650bf6428c7d768807a7420dabc1363093f25 (image=quay.io/ceph/ceph:v18, name=gifted_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:38:20 compute-0 systemd[1]: Started libpod-conmon-f56790e8a06707e3ea7ef23966e650bf6428c7d768807a7420dabc1363093f25.scope.
Nov 26 11:38:20 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a6c95bfc12cc4c01156b985557186fa117567684f6ada7cf4719fab7a7eb36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a6c95bfc12cc4c01156b985557186fa117567684f6ada7cf4719fab7a7eb36/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a6c95bfc12cc4c01156b985557186fa117567684f6ada7cf4719fab7a7eb36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:20 compute-0 podman[75658]: 2025-11-26 11:38:20.240930879 +0000 UTC m=+0.090604565 container init f56790e8a06707e3ea7ef23966e650bf6428c7d768807a7420dabc1363093f25 (image=quay.io/ceph/ceph:v18, name=gifted_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 11:38:20 compute-0 podman[75658]: 2025-11-26 11:38:20.24717687 +0000 UTC m=+0.096850557 container start f56790e8a06707e3ea7ef23966e650bf6428c7d768807a7420dabc1363093f25 (image=quay.io/ceph/ceph:v18, name=gifted_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:38:20 compute-0 podman[75658]: 2025-11-26 11:38:20.248463047 +0000 UTC m=+0.098136734 container attach f56790e8a06707e3ea7ef23966e650bf6428c7d768807a7420dabc1363093f25 (image=quay.io/ceph/ceph:v18, name=gifted_tesla, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:38:20 compute-0 podman[75658]: 2025-11-26 11:38:20.170304201 +0000 UTC m=+0.019977907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 11:38:20 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2985214583' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 11:38:20 compute-0 gifted_tesla[75671]: 
Nov 26 11:38:20 compute-0 gifted_tesla[75671]: {
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:     "fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:     "health": {
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "status": "HEALTH_OK",
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "checks": {},
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "mutes": []
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:     },
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:     "election_epoch": 5,
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:     "quorum": [
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         0
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:     ],
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:     "quorum_names": [
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "compute-0"
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:     ],
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:     "quorum_age": 19,
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:     "monmap": {
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "epoch": 1,
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "min_mon_release_name": "reef",
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "num_mons": 1
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:     },
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:     "osdmap": {
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "epoch": 1,
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "num_osds": 0,
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "num_up_osds": 0,
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "osd_up_since": 0,
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "num_in_osds": 0,
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "osd_in_since": 0,
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "num_remapped_pgs": 0
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:     },
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:     "pgmap": {
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "pgs_by_state": [],
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "num_pgs": 0,
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "num_pools": 0,
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "num_objects": 0,
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "data_bytes": 0,
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "bytes_used": 0,
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "bytes_avail": 0,
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "bytes_total": 0
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:     },
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:     "fsmap": {
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "epoch": 1,
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "by_rank": [],
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "up:standby": 0
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:     },
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:     "mgrmap": {
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "available": false,
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "num_standbys": 0,
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "modules": [
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:             "iostat",
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:             "nfs",
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:             "restful"
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         ],
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "services": {}
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:     },
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:     "servicemap": {
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "epoch": 1,
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "modified": "2025-11-26T11:37:59.453002+0000",
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:         "services": {}
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:     },
Nov 26 11:38:20 compute-0 gifted_tesla[75671]:     "progress_events": {}
Nov 26 11:38:20 compute-0 gifted_tesla[75671]: }
Nov 26 11:38:20 compute-0 systemd[1]: libpod-f56790e8a06707e3ea7ef23966e650bf6428c7d768807a7420dabc1363093f25.scope: Deactivated successfully.
Nov 26 11:38:20 compute-0 podman[75658]: 2025-11-26 11:38:20.582122852 +0000 UTC m=+0.431796558 container died f56790e8a06707e3ea7ef23966e650bf6428c7d768807a7420dabc1363093f25 (image=quay.io/ceph/ceph:v18, name=gifted_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 11:38:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2a6c95bfc12cc4c01156b985557186fa117567684f6ada7cf4719fab7a7eb36-merged.mount: Deactivated successfully.
Nov 26 11:38:20 compute-0 podman[75658]: 2025-11-26 11:38:20.604304495 +0000 UTC m=+0.453978181 container remove f56790e8a06707e3ea7ef23966e650bf6428c7d768807a7420dabc1363093f25 (image=quay.io/ceph/ceph:v18, name=gifted_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 11:38:20 compute-0 systemd[1]: libpod-conmon-f56790e8a06707e3ea7ef23966e650bf6428c7d768807a7420dabc1363093f25.scope: Deactivated successfully.
Nov 26 11:38:20 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.mwrktr(active, since 1.00865s)
Nov 26 11:38:20 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2985214583' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 11:38:20 compute-0 ceph-mon[74928]: mgrmap e3: compute-0.mwrktr(active, since 1.00865s)
Nov 26 11:38:21 compute-0 ceph-mgr[75197]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 11:38:21 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.mwrktr(active, since 2s)
Nov 26 11:38:22 compute-0 podman[75707]: 2025-11-26 11:38:22.64594606 +0000 UTC m=+0.024905393 container create dfe3bd36dfbe01535efba820cad5831af9ca85c6fa075a1bd1482ad0e2fee16f (image=quay.io/ceph/ceph:v18, name=romantic_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:38:22 compute-0 systemd[1]: Started libpod-conmon-dfe3bd36dfbe01535efba820cad5831af9ca85c6fa075a1bd1482ad0e2fee16f.scope.
Nov 26 11:38:22 compute-0 ceph-mon[74928]: mgrmap e4: compute-0.mwrktr(active, since 2s)
Nov 26 11:38:22 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d89815b16dc3d0c31567d0b5801f68b03fa33676f1a481d27e511410cc657cf6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d89815b16dc3d0c31567d0b5801f68b03fa33676f1a481d27e511410cc657cf6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d89815b16dc3d0c31567d0b5801f68b03fa33676f1a481d27e511410cc657cf6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:22 compute-0 podman[75707]: 2025-11-26 11:38:22.705383171 +0000 UTC m=+0.084342513 container init dfe3bd36dfbe01535efba820cad5831af9ca85c6fa075a1bd1482ad0e2fee16f (image=quay.io/ceph/ceph:v18, name=romantic_gould, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:38:22 compute-0 podman[75707]: 2025-11-26 11:38:22.70929517 +0000 UTC m=+0.088254502 container start dfe3bd36dfbe01535efba820cad5831af9ca85c6fa075a1bd1482ad0e2fee16f (image=quay.io/ceph/ceph:v18, name=romantic_gould, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 11:38:22 compute-0 podman[75707]: 2025-11-26 11:38:22.710432154 +0000 UTC m=+0.089391506 container attach dfe3bd36dfbe01535efba820cad5831af9ca85c6fa075a1bd1482ad0e2fee16f (image=quay.io/ceph/ceph:v18, name=romantic_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:38:22 compute-0 podman[75707]: 2025-11-26 11:38:22.636100468 +0000 UTC m=+0.015059811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 11:38:23 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1393475235' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 11:38:23 compute-0 romantic_gould[75720]: 
Nov 26 11:38:23 compute-0 romantic_gould[75720]: {
Nov 26 11:38:23 compute-0 romantic_gould[75720]:     "fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:38:23 compute-0 romantic_gould[75720]:     "health": {
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "status": "HEALTH_OK",
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "checks": {},
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "mutes": []
Nov 26 11:38:23 compute-0 romantic_gould[75720]:     },
Nov 26 11:38:23 compute-0 romantic_gould[75720]:     "election_epoch": 5,
Nov 26 11:38:23 compute-0 romantic_gould[75720]:     "quorum": [
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         0
Nov 26 11:38:23 compute-0 romantic_gould[75720]:     ],
Nov 26 11:38:23 compute-0 romantic_gould[75720]:     "quorum_names": [
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "compute-0"
Nov 26 11:38:23 compute-0 romantic_gould[75720]:     ],
Nov 26 11:38:23 compute-0 romantic_gould[75720]:     "quorum_age": 21,
Nov 26 11:38:23 compute-0 romantic_gould[75720]:     "monmap": {
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "epoch": 1,
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "min_mon_release_name": "reef",
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "num_mons": 1
Nov 26 11:38:23 compute-0 romantic_gould[75720]:     },
Nov 26 11:38:23 compute-0 romantic_gould[75720]:     "osdmap": {
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "epoch": 1,
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "num_osds": 0,
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "num_up_osds": 0,
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "osd_up_since": 0,
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "num_in_osds": 0,
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "osd_in_since": 0,
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "num_remapped_pgs": 0
Nov 26 11:38:23 compute-0 romantic_gould[75720]:     },
Nov 26 11:38:23 compute-0 romantic_gould[75720]:     "pgmap": {
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "pgs_by_state": [],
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "num_pgs": 0,
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "num_pools": 0,
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "num_objects": 0,
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "data_bytes": 0,
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "bytes_used": 0,
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "bytes_avail": 0,
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "bytes_total": 0
Nov 26 11:38:23 compute-0 romantic_gould[75720]:     },
Nov 26 11:38:23 compute-0 romantic_gould[75720]:     "fsmap": {
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "epoch": 1,
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "by_rank": [],
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "up:standby": 0
Nov 26 11:38:23 compute-0 romantic_gould[75720]:     },
Nov 26 11:38:23 compute-0 romantic_gould[75720]:     "mgrmap": {
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "available": true,
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "num_standbys": 0,
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "modules": [
Nov 26 11:38:23 compute-0 romantic_gould[75720]:             "iostat",
Nov 26 11:38:23 compute-0 romantic_gould[75720]:             "nfs",
Nov 26 11:38:23 compute-0 romantic_gould[75720]:             "restful"
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         ],
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "services": {}
Nov 26 11:38:23 compute-0 romantic_gould[75720]:     },
Nov 26 11:38:23 compute-0 romantic_gould[75720]:     "servicemap": {
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "epoch": 1,
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "modified": "2025-11-26T11:37:59.453002+0000",
Nov 26 11:38:23 compute-0 romantic_gould[75720]:         "services": {}
Nov 26 11:38:23 compute-0 romantic_gould[75720]:     },
Nov 26 11:38:23 compute-0 romantic_gould[75720]:     "progress_events": {}
Nov 26 11:38:23 compute-0 romantic_gould[75720]: }
Nov 26 11:38:23 compute-0 systemd[1]: libpod-dfe3bd36dfbe01535efba820cad5831af9ca85c6fa075a1bd1482ad0e2fee16f.scope: Deactivated successfully.
Nov 26 11:38:23 compute-0 podman[75746]: 2025-11-26 11:38:23.247432374 +0000 UTC m=+0.015047206 container died dfe3bd36dfbe01535efba820cad5831af9ca85c6fa075a1bd1482ad0e2fee16f (image=quay.io/ceph/ceph:v18, name=romantic_gould, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 11:38:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-d89815b16dc3d0c31567d0b5801f68b03fa33676f1a481d27e511410cc657cf6-merged.mount: Deactivated successfully.
Nov 26 11:38:23 compute-0 podman[75746]: 2025-11-26 11:38:23.266990738 +0000 UTC m=+0.034605570 container remove dfe3bd36dfbe01535efba820cad5831af9ca85c6fa075a1bd1482ad0e2fee16f (image=quay.io/ceph/ceph:v18, name=romantic_gould, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 11:38:23 compute-0 systemd[1]: libpod-conmon-dfe3bd36dfbe01535efba820cad5831af9ca85c6fa075a1bd1482ad0e2fee16f.scope: Deactivated successfully.
Nov 26 11:38:23 compute-0 podman[75758]: 2025-11-26 11:38:23.310182373 +0000 UTC m=+0.026614876 container create a8e308c880587c426213fa9772a36cda9866916846faebf8582e1845babd8c2c (image=quay.io/ceph/ceph:v18, name=competent_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:38:23 compute-0 systemd[1]: Started libpod-conmon-a8e308c880587c426213fa9772a36cda9866916846faebf8582e1845babd8c2c.scope.
Nov 26 11:38:23 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/274fbc807ea8101926ce68b72d8ec0bfca75e5f148f54f8808eb7a4b915a15f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/274fbc807ea8101926ce68b72d8ec0bfca75e5f148f54f8808eb7a4b915a15f1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/274fbc807ea8101926ce68b72d8ec0bfca75e5f148f54f8808eb7a4b915a15f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/274fbc807ea8101926ce68b72d8ec0bfca75e5f148f54f8808eb7a4b915a15f1/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:23 compute-0 podman[75758]: 2025-11-26 11:38:23.359676538 +0000 UTC m=+0.076109060 container init a8e308c880587c426213fa9772a36cda9866916846faebf8582e1845babd8c2c (image=quay.io/ceph/ceph:v18, name=competent_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:38:23 compute-0 podman[75758]: 2025-11-26 11:38:23.364706456 +0000 UTC m=+0.081138960 container start a8e308c880587c426213fa9772a36cda9866916846faebf8582e1845babd8c2c (image=quay.io/ceph/ceph:v18, name=competent_williamson, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:38:23 compute-0 podman[75758]: 2025-11-26 11:38:23.366106547 +0000 UTC m=+0.082539081 container attach a8e308c880587c426213fa9772a36cda9866916846faebf8582e1845babd8c2c (image=quay.io/ceph/ceph:v18, name=competent_williamson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:38:23 compute-0 podman[75758]: 2025-11-26 11:38:23.298897406 +0000 UTC m=+0.015329929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:23 compute-0 ceph-mgr[75197]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 11:38:23 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1393475235' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 11:38:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 26 11:38:23 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2407762542' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 26 11:38:23 compute-0 systemd[1]: libpod-a8e308c880587c426213fa9772a36cda9866916846faebf8582e1845babd8c2c.scope: Deactivated successfully.
Nov 26 11:38:23 compute-0 podman[75798]: 2025-11-26 11:38:23.810333972 +0000 UTC m=+0.015438083 container died a8e308c880587c426213fa9772a36cda9866916846faebf8582e1845babd8c2c (image=quay.io/ceph/ceph:v18, name=competent_williamson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 11:38:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-274fbc807ea8101926ce68b72d8ec0bfca75e5f148f54f8808eb7a4b915a15f1-merged.mount: Deactivated successfully.
Nov 26 11:38:23 compute-0 podman[75798]: 2025-11-26 11:38:23.832717426 +0000 UTC m=+0.037821517 container remove a8e308c880587c426213fa9772a36cda9866916846faebf8582e1845babd8c2c (image=quay.io/ceph/ceph:v18, name=competent_williamson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 11:38:23 compute-0 systemd[1]: libpod-conmon-a8e308c880587c426213fa9772a36cda9866916846faebf8582e1845babd8c2c.scope: Deactivated successfully.
Nov 26 11:38:23 compute-0 podman[75810]: 2025-11-26 11:38:23.875516271 +0000 UTC m=+0.026411433 container create 34225d8946e1ff8c9a1a26d19cc812379155c5314d4eeaef07d2e70531949c5d (image=quay.io/ceph/ceph:v18, name=fervent_liskov, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:38:23 compute-0 systemd[1]: Started libpod-conmon-34225d8946e1ff8c9a1a26d19cc812379155c5314d4eeaef07d2e70531949c5d.scope.
Nov 26 11:38:23 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbe905615e5b3ae4dd7469baf899304354a66f8c56190205658434a76c2ec6ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbe905615e5b3ae4dd7469baf899304354a66f8c56190205658434a76c2ec6ea/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbe905615e5b3ae4dd7469baf899304354a66f8c56190205658434a76c2ec6ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:23 compute-0 podman[75810]: 2025-11-26 11:38:23.912031632 +0000 UTC m=+0.062926814 container init 34225d8946e1ff8c9a1a26d19cc812379155c5314d4eeaef07d2e70531949c5d (image=quay.io/ceph/ceph:v18, name=fervent_liskov, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:38:23 compute-0 podman[75810]: 2025-11-26 11:38:23.915833474 +0000 UTC m=+0.066728636 container start 34225d8946e1ff8c9a1a26d19cc812379155c5314d4eeaef07d2e70531949c5d (image=quay.io/ceph/ceph:v18, name=fervent_liskov, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:38:23 compute-0 podman[75810]: 2025-11-26 11:38:23.916925534 +0000 UTC m=+0.067820706 container attach 34225d8946e1ff8c9a1a26d19cc812379155c5314d4eeaef07d2e70531949c5d (image=quay.io/ceph/ceph:v18, name=fervent_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Nov 26 11:38:23 compute-0 podman[75810]: 2025-11-26 11:38:23.864746404 +0000 UTC m=+0.015641586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Nov 26 11:38:24 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2925637711' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 26 11:38:24 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2925637711' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 26 11:38:24 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.mwrktr(active, since 5s)
Nov 26 11:38:24 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2407762542' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 26 11:38:24 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2925637711' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 26 11:38:24 compute-0 systemd[1]: libpod-34225d8946e1ff8c9a1a26d19cc812379155c5314d4eeaef07d2e70531949c5d.scope: Deactivated successfully.
Nov 26 11:38:24 compute-0 podman[75810]: 2025-11-26 11:38:24.699778271 +0000 UTC m=+0.850673443 container died 34225d8946e1ff8c9a1a26d19cc812379155c5314d4eeaef07d2e70531949c5d (image=quay.io/ceph/ceph:v18, name=fervent_liskov, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 11:38:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbe905615e5b3ae4dd7469baf899304354a66f8c56190205658434a76c2ec6ea-merged.mount: Deactivated successfully.
Nov 26 11:38:24 compute-0 podman[75810]: 2025-11-26 11:38:24.721472114 +0000 UTC m=+0.872367276 container remove 34225d8946e1ff8c9a1a26d19cc812379155c5314d4eeaef07d2e70531949c5d (image=quay.io/ceph/ceph:v18, name=fervent_liskov, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 26 11:38:24 compute-0 systemd[1]: libpod-conmon-34225d8946e1ff8c9a1a26d19cc812379155c5314d4eeaef07d2e70531949c5d.scope: Deactivated successfully.
Nov 26 11:38:24 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: ignoring --setuser ceph since I am not root
Nov 26 11:38:24 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: ignoring --setgroup ceph since I am not root
Nov 26 11:38:24 compute-0 ceph-mgr[75197]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 26 11:38:24 compute-0 ceph-mgr[75197]: pidfile_write: ignore empty --pid-file
Nov 26 11:38:24 compute-0 podman[75858]: 2025-11-26 11:38:24.766139544 +0000 UTC m=+0.031388382 container create f6fb2e8b621aa2dc28a57400d077fe633604f89d73b43a4bd351c4dafa9d6d20 (image=quay.io/ceph/ceph:v18, name=unruffled_kepler, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 11:38:24 compute-0 systemd[1]: Started libpod-conmon-f6fb2e8b621aa2dc28a57400d077fe633604f89d73b43a4bd351c4dafa9d6d20.scope.
Nov 26 11:38:24 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b17e5c1d693fedd3c0028711e3f023f35f3137ebc7817c70040c41c949d633/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b17e5c1d693fedd3c0028711e3f023f35f3137ebc7817c70040c41c949d633/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b17e5c1d693fedd3c0028711e3f023f35f3137ebc7817c70040c41c949d633/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:24 compute-0 podman[75858]: 2025-11-26 11:38:24.822608464 +0000 UTC m=+0.087857313 container init f6fb2e8b621aa2dc28a57400d077fe633604f89d73b43a4bd351c4dafa9d6d20 (image=quay.io/ceph/ceph:v18, name=unruffled_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 11:38:24 compute-0 podman[75858]: 2025-11-26 11:38:24.826479006 +0000 UTC m=+0.091727844 container start f6fb2e8b621aa2dc28a57400d077fe633604f89d73b43a4bd351c4dafa9d6d20 (image=quay.io/ceph/ceph:v18, name=unruffled_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 11:38:24 compute-0 podman[75858]: 2025-11-26 11:38:24.827645246 +0000 UTC m=+0.092894084 container attach f6fb2e8b621aa2dc28a57400d077fe633604f89d73b43a4bd351c4dafa9d6d20 (image=quay.io/ceph/ceph:v18, name=unruffled_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:38:24 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'alerts'
Nov 26 11:38:24 compute-0 podman[75858]: 2025-11-26 11:38:24.755603829 +0000 UTC m=+0.020852688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:25 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:25.102+0000 7fc9f6eef140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 26 11:38:25 compute-0 ceph-mgr[75197]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 26 11:38:25 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'balancer'
Nov 26 11:38:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 26 11:38:25 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2248352023' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 26 11:38:25 compute-0 unruffled_kepler[75896]: {
Nov 26 11:38:25 compute-0 unruffled_kepler[75896]:     "epoch": 5,
Nov 26 11:38:25 compute-0 unruffled_kepler[75896]:     "available": true,
Nov 26 11:38:25 compute-0 unruffled_kepler[75896]:     "active_name": "compute-0.mwrktr",
Nov 26 11:38:25 compute-0 unruffled_kepler[75896]:     "num_standby": 0
Nov 26 11:38:25 compute-0 unruffled_kepler[75896]: }
Nov 26 11:38:25 compute-0 systemd[1]: libpod-f6fb2e8b621aa2dc28a57400d077fe633604f89d73b43a4bd351c4dafa9d6d20.scope: Deactivated successfully.
Nov 26 11:38:25 compute-0 conmon[75896]: conmon f6fb2e8b621aa2dc28a5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f6fb2e8b621aa2dc28a57400d077fe633604f89d73b43a4bd351c4dafa9d6d20.scope/container/memory.events
Nov 26 11:38:25 compute-0 podman[75922]: 2025-11-26 11:38:25.320834351 +0000 UTC m=+0.016523721 container died f6fb2e8b621aa2dc28a57400d077fe633604f89d73b43a4bd351c4dafa9d6d20 (image=quay.io/ceph/ceph:v18, name=unruffled_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:38:25 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:25.321+0000 7fc9f6eef140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 26 11:38:25 compute-0 ceph-mgr[75197]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 26 11:38:25 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'cephadm'
Nov 26 11:38:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-09b17e5c1d693fedd3c0028711e3f023f35f3137ebc7817c70040c41c949d633-merged.mount: Deactivated successfully.
Nov 26 11:38:25 compute-0 podman[75922]: 2025-11-26 11:38:25.341660667 +0000 UTC m=+0.037350027 container remove f6fb2e8b621aa2dc28a57400d077fe633604f89d73b43a4bd351c4dafa9d6d20 (image=quay.io/ceph/ceph:v18, name=unruffled_kepler, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:38:25 compute-0 systemd[1]: libpod-conmon-f6fb2e8b621aa2dc28a57400d077fe633604f89d73b43a4bd351c4dafa9d6d20.scope: Deactivated successfully.
Nov 26 11:38:25 compute-0 podman[75934]: 2025-11-26 11:38:25.383401475 +0000 UTC m=+0.025202071 container create 64b37b8afc348bbed54c5df4925b30c19cd1439c0d50e453e65f0d25270cdb70 (image=quay.io/ceph/ceph:v18, name=ecstatic_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:38:25 compute-0 systemd[1]: Started libpod-conmon-64b37b8afc348bbed54c5df4925b30c19cd1439c0d50e453e65f0d25270cdb70.scope.
Nov 26 11:38:25 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f308b0b04c1e40b11547219a36e08219e2c4f8bcffa49e741340af4fbdf9394/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f308b0b04c1e40b11547219a36e08219e2c4f8bcffa49e741340af4fbdf9394/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f308b0b04c1e40b11547219a36e08219e2c4f8bcffa49e741340af4fbdf9394/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:25 compute-0 podman[75934]: 2025-11-26 11:38:25.431179272 +0000 UTC m=+0.072979878 container init 64b37b8afc348bbed54c5df4925b30c19cd1439c0d50e453e65f0d25270cdb70 (image=quay.io/ceph/ceph:v18, name=ecstatic_buck, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:38:25 compute-0 podman[75934]: 2025-11-26 11:38:25.435345571 +0000 UTC m=+0.077146178 container start 64b37b8afc348bbed54c5df4925b30c19cd1439c0d50e453e65f0d25270cdb70 (image=quay.io/ceph/ceph:v18, name=ecstatic_buck, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 11:38:25 compute-0 podman[75934]: 2025-11-26 11:38:25.437912814 +0000 UTC m=+0.079713410 container attach 64b37b8afc348bbed54c5df4925b30c19cd1439c0d50e453e65f0d25270cdb70 (image=quay.io/ceph/ceph:v18, name=ecstatic_buck, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 11:38:25 compute-0 podman[75934]: 2025-11-26 11:38:25.373525085 +0000 UTC m=+0.015325702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:25 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2925637711' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 26 11:38:25 compute-0 ceph-mon[74928]: mgrmap e5: compute-0.mwrktr(active, since 5s)
Nov 26 11:38:25 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2248352023' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 26 11:38:26 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'crash'
Nov 26 11:38:27 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:27.173+0000 7fc9f6eef140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 26 11:38:27 compute-0 ceph-mgr[75197]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 26 11:38:27 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'dashboard'
Nov 26 11:38:28 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'devicehealth'
Nov 26 11:38:28 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:28.605+0000 7fc9f6eef140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 26 11:38:28 compute-0 ceph-mgr[75197]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 26 11:38:28 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'diskprediction_local'
Nov 26 11:38:29 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 26 11:38:29 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 26 11:38:29 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]:   from numpy import show_config as show_numpy_config
Nov 26 11:38:29 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:29.060+0000 7fc9f6eef140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 26 11:38:29 compute-0 ceph-mgr[75197]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 26 11:38:29 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'influx'
Nov 26 11:38:29 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:29.268+0000 7fc9f6eef140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 26 11:38:29 compute-0 ceph-mgr[75197]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 26 11:38:29 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'insights'
Nov 26 11:38:29 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'iostat'
Nov 26 11:38:29 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:29.679+0000 7fc9f6eef140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 26 11:38:29 compute-0 ceph-mgr[75197]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 26 11:38:29 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'k8sevents'
Nov 26 11:38:31 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'localpool'
Nov 26 11:38:31 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'mds_autoscaler'
Nov 26 11:38:31 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'mirroring'
Nov 26 11:38:32 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'nfs'
Nov 26 11:38:32 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:32.764+0000 7fc9f6eef140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 26 11:38:32 compute-0 ceph-mgr[75197]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 26 11:38:32 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'orchestrator'
Nov 26 11:38:33 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:33.340+0000 7fc9f6eef140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 26 11:38:33 compute-0 ceph-mgr[75197]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 26 11:38:33 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'osd_perf_query'
Nov 26 11:38:33 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:33.573+0000 7fc9f6eef140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 26 11:38:33 compute-0 ceph-mgr[75197]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 26 11:38:33 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'osd_support'
Nov 26 11:38:33 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:33.777+0000 7fc9f6eef140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 26 11:38:33 compute-0 ceph-mgr[75197]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 26 11:38:33 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'pg_autoscaler'
Nov 26 11:38:34 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:34.013+0000 7fc9f6eef140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 26 11:38:34 compute-0 ceph-mgr[75197]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 26 11:38:34 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'progress'
Nov 26 11:38:34 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:34.223+0000 7fc9f6eef140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 26 11:38:34 compute-0 ceph-mgr[75197]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 26 11:38:34 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'prometheus'
Nov 26 11:38:35 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:35.086+0000 7fc9f6eef140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 26 11:38:35 compute-0 ceph-mgr[75197]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 26 11:38:35 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'rbd_support'
Nov 26 11:38:35 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:35.346+0000 7fc9f6eef140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 26 11:38:35 compute-0 ceph-mgr[75197]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 26 11:38:35 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'restful'
Nov 26 11:38:35 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'rgw'
Nov 26 11:38:36 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:36.574+0000 7fc9f6eef140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 26 11:38:36 compute-0 ceph-mgr[75197]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 26 11:38:36 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'rook'
Nov 26 11:38:38 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:38.357+0000 7fc9f6eef140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 26 11:38:38 compute-0 ceph-mgr[75197]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 26 11:38:38 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'selftest'
Nov 26 11:38:38 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:38.575+0000 7fc9f6eef140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 26 11:38:38 compute-0 ceph-mgr[75197]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 26 11:38:38 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'snap_schedule'
Nov 26 11:38:38 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:38.792+0000 7fc9f6eef140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 26 11:38:38 compute-0 ceph-mgr[75197]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 26 11:38:38 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'stats'
Nov 26 11:38:39 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'status'
Nov 26 11:38:39 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:39.233+0000 7fc9f6eef140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 26 11:38:39 compute-0 ceph-mgr[75197]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 26 11:38:39 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'telegraf'
Nov 26 11:38:39 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:39.440+0000 7fc9f6eef140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 26 11:38:39 compute-0 ceph-mgr[75197]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 26 11:38:39 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'telemetry'
Nov 26 11:38:39 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:39.955+0000 7fc9f6eef140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 26 11:38:39 compute-0 ceph-mgr[75197]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 26 11:38:39 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'test_orchestrator'
Nov 26 11:38:40 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:40.535+0000 7fc9f6eef140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 26 11:38:40 compute-0 ceph-mgr[75197]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 26 11:38:40 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'volumes'
Nov 26 11:38:41 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:41.147+0000 7fc9f6eef140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: mgr[py] Loading python module 'zabbix'
Nov 26 11:38:41 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:38:41.354+0000 7fc9f6eef140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 26 11:38:41 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : Active manager daemon compute-0.mwrktr restarted
Nov 26 11:38:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 26 11:38:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: ms_deliver_dispatch: unhandled message 0x5608766891e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 26 11:38:41 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.mwrktr
Nov 26 11:38:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 26 11:38:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 26 11:38:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: mgr handle_mgr_map Activating!
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: mgr handle_mgr_map I am now activating
Nov 26 11:38:41 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 26 11:38:41 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.mwrktr(active, starting, since 0.00888111s)
Nov 26 11:38:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 26 11:38:41 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 26 11:38:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.mwrktr", "id": "compute-0.mwrktr"} v 0) v1
Nov 26 11:38:41 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mwrktr", "id": "compute-0.mwrktr"}]: dispatch
Nov 26 11:38:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 26 11:38:41 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 26 11:38:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).mds e1 all = 1
Nov 26 11:38:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 26 11:38:41 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 26 11:38:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 26 11:38:41 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: balancer
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Starting
Nov 26 11:38:41 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : Manager daemon compute-0.mwrktr is now available
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Optimize plan auto_2025-11-26_11:38:41
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [balancer INFO root] do_upmap
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [balancer INFO root] No pools available
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 26 11:38:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Nov 26 11:38:41 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Nov 26 11:38:41 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: cephadm
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: crash
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: devicehealth
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: iostat
Nov 26 11:38:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 26 11:38:41 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: nfs
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: orchestrator
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [devicehealth INFO root] Starting
Nov 26 11:38:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 26 11:38:41 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: pg_autoscaler
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: progress
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [progress INFO root] Loading...
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [progress INFO root] No stored events to load
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [progress INFO root] Loaded [] historic events
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [progress INFO root] Loaded OSDMap, ready.
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] recovery thread starting
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] starting setup
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: rbd_support
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: restful
Nov 26 11:38:41 compute-0 ceph-mon[74928]: Active manager daemon compute-0.mwrktr restarted
Nov 26 11:38:41 compute-0 ceph-mon[74928]: Activating manager daemon compute-0.mwrktr
Nov 26 11:38:41 compute-0 ceph-mon[74928]: osdmap e2: 0 total, 0 up, 0 in
Nov 26 11:38:41 compute-0 ceph-mon[74928]: mgrmap e6: compute-0.mwrktr(active, starting, since 0.00888111s)
Nov 26 11:38:41 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 26 11:38:41 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mwrktr", "id": "compute-0.mwrktr"}]: dispatch
Nov 26 11:38:41 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 26 11:38:41 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 26 11:38:41 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 26 11:38:41 compute-0 ceph-mon[74928]: Manager daemon compute-0.mwrktr is now available
Nov 26 11:38:41 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:41 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:41 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 11:38:41 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [restful INFO root] server_addr: :: server_port: 8003
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: status
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [restful WARNING root] server not running: no certificate configured
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: telemetry
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 11:38:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mwrktr/mirror_snapshot_schedule"} v 0) v1
Nov 26 11:38:41 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mwrktr/mirror_snapshot_schedule"}]: dispatch
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] PerfHandler: starting
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TaskHandler: starting
Nov 26 11:38:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mwrktr/trash_purge_schedule"} v 0) v1
Nov 26 11:38:41 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mwrktr/trash_purge_schedule"}]: dispatch
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] setup complete
Nov 26 11:38:41 compute-0 ceph-mgr[75197]: mgr load Constructed class from module: volumes
Nov 26 11:38:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019928952 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:38:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Nov 26 11:38:41 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Nov 26 11:38:41 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:42 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.mwrktr(active, since 1.01193s)
Nov 26 11:38:42 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 26 11:38:42 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 26 11:38:42 compute-0 ecstatic_buck[75947]: {
Nov 26 11:38:42 compute-0 ecstatic_buck[75947]:     "mgrmap_epoch": 7,
Nov 26 11:38:42 compute-0 ecstatic_buck[75947]:     "initialized": true
Nov 26 11:38:42 compute-0 ecstatic_buck[75947]: }
Nov 26 11:38:42 compute-0 systemd[1]: libpod-64b37b8afc348bbed54c5df4925b30c19cd1439c0d50e453e65f0d25270cdb70.scope: Deactivated successfully.
Nov 26 11:38:42 compute-0 podman[75934]: 2025-11-26 11:38:42.388437463 +0000 UTC m=+17.030238058 container died 64b37b8afc348bbed54c5df4925b30c19cd1439c0d50e453e65f0d25270cdb70 (image=quay.io/ceph/ceph:v18, name=ecstatic_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:38:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f308b0b04c1e40b11547219a36e08219e2c4f8bcffa49e741340af4fbdf9394-merged.mount: Deactivated successfully.
Nov 26 11:38:42 compute-0 podman[75934]: 2025-11-26 11:38:42.414248992 +0000 UTC m=+17.056049588 container remove 64b37b8afc348bbed54c5df4925b30c19cd1439c0d50e453e65f0d25270cdb70 (image=quay.io/ceph/ceph:v18, name=ecstatic_buck, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:38:42 compute-0 ceph-mon[74928]: Found migration_current of "None". Setting to last migration.
Nov 26 11:38:42 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mwrktr/mirror_snapshot_schedule"}]: dispatch
Nov 26 11:38:42 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mwrktr/trash_purge_schedule"}]: dispatch
Nov 26 11:38:42 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:42 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:42 compute-0 ceph-mon[74928]: mgrmap e7: compute-0.mwrktr(active, since 1.01193s)
Nov 26 11:38:42 compute-0 systemd[1]: libpod-conmon-64b37b8afc348bbed54c5df4925b30c19cd1439c0d50e453e65f0d25270cdb70.scope: Deactivated successfully.
Nov 26 11:38:42 compute-0 podman[76102]: 2025-11-26 11:38:42.459237637 +0000 UTC m=+0.028820107 container create c087594e588efc762103e6b4b674b5cc5adb87a9225811568aebb2215638d66f (image=quay.io/ceph/ceph:v18, name=busy_galois, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 11:38:42 compute-0 systemd[1]: Started libpod-conmon-c087594e588efc762103e6b4b674b5cc5adb87a9225811568aebb2215638d66f.scope.
Nov 26 11:38:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ff0fa868292076f388b4a71d390f3f918291dbf854241d93dc811aa2c33c978/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ff0fa868292076f388b4a71d390f3f918291dbf854241d93dc811aa2c33c978/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ff0fa868292076f388b4a71d390f3f918291dbf854241d93dc811aa2c33c978/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:42 compute-0 podman[76102]: 2025-11-26 11:38:42.52701489 +0000 UTC m=+0.096597369 container init c087594e588efc762103e6b4b674b5cc5adb87a9225811568aebb2215638d66f (image=quay.io/ceph/ceph:v18, name=busy_galois, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 11:38:42 compute-0 podman[76102]: 2025-11-26 11:38:42.53154212 +0000 UTC m=+0.101124599 container start c087594e588efc762103e6b4b674b5cc5adb87a9225811568aebb2215638d66f (image=quay.io/ceph/ceph:v18, name=busy_galois, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:38:42 compute-0 podman[76102]: 2025-11-26 11:38:42.532723799 +0000 UTC m=+0.102306268 container attach c087594e588efc762103e6b4b674b5cc5adb87a9225811568aebb2215638d66f (image=quay.io/ceph/ceph:v18, name=busy_galois, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 26 11:38:42 compute-0 podman[76102]: 2025-11-26 11:38:42.447920679 +0000 UTC m=+0.017503178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:42 compute-0 ceph-mgr[75197]: [cephadm INFO cherrypy.error] [26/Nov/2025:11:38:42] ENGINE Bus STARTING
Nov 26 11:38:42 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : [26/Nov/2025:11:38:42] ENGINE Bus STARTING
Nov 26 11:38:42 compute-0 ceph-mgr[75197]: [cephadm INFO cherrypy.error] [26/Nov/2025:11:38:42] ENGINE Serving on http://192.168.122.100:8765
Nov 26 11:38:42 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : [26/Nov/2025:11:38:42] ENGINE Serving on http://192.168.122.100:8765
Nov 26 11:38:42 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:38:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Nov 26 11:38:42 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 26 11:38:42 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 11:38:42 compute-0 ceph-mgr[75197]: [cephadm INFO cherrypy.error] [26/Nov/2025:11:38:42] ENGINE Serving on https://192.168.122.100:7150
Nov 26 11:38:42 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : [26/Nov/2025:11:38:42] ENGINE Serving on https://192.168.122.100:7150
Nov 26 11:38:42 compute-0 ceph-mgr[75197]: [cephadm INFO cherrypy.error] [26/Nov/2025:11:38:42] ENGINE Bus STARTED
Nov 26 11:38:42 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : [26/Nov/2025:11:38:42] ENGINE Bus STARTED
Nov 26 11:38:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 26 11:38:42 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 11:38:42 compute-0 ceph-mgr[75197]: [cephadm INFO cherrypy.error] [26/Nov/2025:11:38:42] ENGINE Client ('192.168.122.100', 59754) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 26 11:38:42 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : [26/Nov/2025:11:38:42] ENGINE Client ('192.168.122.100', 59754) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 26 11:38:42 compute-0 systemd[1]: libpod-c087594e588efc762103e6b4b674b5cc5adb87a9225811568aebb2215638d66f.scope: Deactivated successfully.
Nov 26 11:38:42 compute-0 podman[76102]: 2025-11-26 11:38:42.984222501 +0000 UTC m=+0.553804981 container died c087594e588efc762103e6b4b674b5cc5adb87a9225811568aebb2215638d66f (image=quay.io/ceph/ceph:v18, name=busy_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 11:38:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ff0fa868292076f388b4a71d390f3f918291dbf854241d93dc811aa2c33c978-merged.mount: Deactivated successfully.
Nov 26 11:38:43 compute-0 podman[76102]: 2025-11-26 11:38:43.004308341 +0000 UTC m=+0.573890820 container remove c087594e588efc762103e6b4b674b5cc5adb87a9225811568aebb2215638d66f (image=quay.io/ceph/ceph:v18, name=busy_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 26 11:38:43 compute-0 systemd[1]: libpod-conmon-c087594e588efc762103e6b4b674b5cc5adb87a9225811568aebb2215638d66f.scope: Deactivated successfully.
Nov 26 11:38:43 compute-0 podman[76177]: 2025-11-26 11:38:43.044280332 +0000 UTC m=+0.025787476 container create 67d4b2c04f4acb6bc528af99339b64a3b0fa05e0cb0a3230439dc35544eeafbb (image=quay.io/ceph/ceph:v18, name=modest_borg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 26 11:38:43 compute-0 systemd[1]: Started libpod-conmon-67d4b2c04f4acb6bc528af99339b64a3b0fa05e0cb0a3230439dc35544eeafbb.scope.
Nov 26 11:38:43 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1267a51694d95bbfe463769c260c0df6e31f885124debcdbbe6373629246d0d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1267a51694d95bbfe463769c260c0df6e31f885124debcdbbe6373629246d0d7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1267a51694d95bbfe463769c260c0df6e31f885124debcdbbe6373629246d0d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:43 compute-0 podman[76177]: 2025-11-26 11:38:43.098066222 +0000 UTC m=+0.079573396 container init 67d4b2c04f4acb6bc528af99339b64a3b0fa05e0cb0a3230439dc35544eeafbb (image=quay.io/ceph/ceph:v18, name=modest_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:38:43 compute-0 podman[76177]: 2025-11-26 11:38:43.10174414 +0000 UTC m=+0.083251283 container start 67d4b2c04f4acb6bc528af99339b64a3b0fa05e0cb0a3230439dc35544eeafbb (image=quay.io/ceph/ceph:v18, name=modest_borg, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:38:43 compute-0 podman[76177]: 2025-11-26 11:38:43.102890623 +0000 UTC m=+0.084397765 container attach 67d4b2c04f4acb6bc528af99339b64a3b0fa05e0cb0a3230439dc35544eeafbb (image=quay.io/ceph/ceph:v18, name=modest_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 11:38:43 compute-0 podman[76177]: 2025-11-26 11:38:43.033869373 +0000 UTC m=+0.015376537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:43 compute-0 ceph-mgr[75197]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 11:38:43 compute-0 ceph-mon[74928]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 26 11:38:43 compute-0 ceph-mon[74928]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 26 11:38:43 compute-0 ceph-mon[74928]: [26/Nov/2025:11:38:42] ENGINE Bus STARTING
Nov 26 11:38:43 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:43 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 11:38:43 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 11:38:43 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:38:43 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Nov 26 11:38:43 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:43 compute-0 ceph-mgr[75197]: [cephadm INFO root] Set ssh ssh_user
Nov 26 11:38:43 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 26 11:38:43 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Nov 26 11:38:43 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:43 compute-0 ceph-mgr[75197]: [cephadm INFO root] Set ssh ssh_config
Nov 26 11:38:43 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 26 11:38:43 compute-0 ceph-mgr[75197]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 26 11:38:43 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 26 11:38:43 compute-0 modest_borg[76192]: ssh user set to ceph-admin. sudo will be used
Nov 26 11:38:43 compute-0 systemd[1]: libpod-67d4b2c04f4acb6bc528af99339b64a3b0fa05e0cb0a3230439dc35544eeafbb.scope: Deactivated successfully.
Nov 26 11:38:43 compute-0 conmon[76192]: conmon 67d4b2c04f4acb6bc528 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-67d4b2c04f4acb6bc528af99339b64a3b0fa05e0cb0a3230439dc35544eeafbb.scope/container/memory.events
Nov 26 11:38:43 compute-0 podman[76177]: 2025-11-26 11:38:43.540371964 +0000 UTC m=+0.521879117 container died 67d4b2c04f4acb6bc528af99339b64a3b0fa05e0cb0a3230439dc35544eeafbb (image=quay.io/ceph/ceph:v18, name=modest_borg, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 11:38:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-1267a51694d95bbfe463769c260c0df6e31f885124debcdbbe6373629246d0d7-merged.mount: Deactivated successfully.
Nov 26 11:38:43 compute-0 podman[76177]: 2025-11-26 11:38:43.560829795 +0000 UTC m=+0.542336938 container remove 67d4b2c04f4acb6bc528af99339b64a3b0fa05e0cb0a3230439dc35544eeafbb (image=quay.io/ceph/ceph:v18, name=modest_borg, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:38:43 compute-0 systemd[1]: libpod-conmon-67d4b2c04f4acb6bc528af99339b64a3b0fa05e0cb0a3230439dc35544eeafbb.scope: Deactivated successfully.
Nov 26 11:38:43 compute-0 podman[76227]: 2025-11-26 11:38:43.601310115 +0000 UTC m=+0.026227266 container create 04809e73a363ffae55d030724b89db47a3281977b3926c46b50a03bb42c539e1 (image=quay.io/ceph/ceph:v18, name=xenodochial_lichterman, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:38:43 compute-0 systemd[1]: Started libpod-conmon-04809e73a363ffae55d030724b89db47a3281977b3926c46b50a03bb42c539e1.scope.
Nov 26 11:38:43 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccff0f74b2188cce91124defb69492f3f1a5a353e68911f6a33cd3d3e1110992/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccff0f74b2188cce91124defb69492f3f1a5a353e68911f6a33cd3d3e1110992/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccff0f74b2188cce91124defb69492f3f1a5a353e68911f6a33cd3d3e1110992/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccff0f74b2188cce91124defb69492f3f1a5a353e68911f6a33cd3d3e1110992/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccff0f74b2188cce91124defb69492f3f1a5a353e68911f6a33cd3d3e1110992/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:43 compute-0 podman[76227]: 2025-11-26 11:38:43.651554725 +0000 UTC m=+0.076471885 container init 04809e73a363ffae55d030724b89db47a3281977b3926c46b50a03bb42c539e1 (image=quay.io/ceph/ceph:v18, name=xenodochial_lichterman, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 11:38:43 compute-0 podman[76227]: 2025-11-26 11:38:43.656701303 +0000 UTC m=+0.081618453 container start 04809e73a363ffae55d030724b89db47a3281977b3926c46b50a03bb42c539e1 (image=quay.io/ceph/ceph:v18, name=xenodochial_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 11:38:43 compute-0 podman[76227]: 2025-11-26 11:38:43.657763667 +0000 UTC m=+0.082680838 container attach 04809e73a363ffae55d030724b89db47a3281977b3926c46b50a03bb42c539e1 (image=quay.io/ceph/ceph:v18, name=xenodochial_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:38:43 compute-0 podman[76227]: 2025-11-26 11:38:43.590454988 +0000 UTC m=+0.015372158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:43 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.mwrktr(active, since 2s)
Nov 26 11:38:44 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:38:44 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Nov 26 11:38:44 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:44 compute-0 ceph-mgr[75197]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 26 11:38:44 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 26 11:38:44 compute-0 ceph-mgr[75197]: [cephadm INFO root] Set ssh private key
Nov 26 11:38:44 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 26 11:38:44 compute-0 systemd[1]: libpod-04809e73a363ffae55d030724b89db47a3281977b3926c46b50a03bb42c539e1.scope: Deactivated successfully.
Nov 26 11:38:44 compute-0 podman[76267]: 2025-11-26 11:38:44.107517438 +0000 UTC m=+0.013781328 container died 04809e73a363ffae55d030724b89db47a3281977b3926c46b50a03bb42c539e1 (image=quay.io/ceph/ceph:v18, name=xenodochial_lichterman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:38:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccff0f74b2188cce91124defb69492f3f1a5a353e68911f6a33cd3d3e1110992-merged.mount: Deactivated successfully.
Nov 26 11:38:44 compute-0 podman[76267]: 2025-11-26 11:38:44.126457615 +0000 UTC m=+0.032721496 container remove 04809e73a363ffae55d030724b89db47a3281977b3926c46b50a03bb42c539e1 (image=quay.io/ceph/ceph:v18, name=xenodochial_lichterman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 11:38:44 compute-0 systemd[1]: libpod-conmon-04809e73a363ffae55d030724b89db47a3281977b3926c46b50a03bb42c539e1.scope: Deactivated successfully.
Nov 26 11:38:44 compute-0 podman[76279]: 2025-11-26 11:38:44.164919239 +0000 UTC m=+0.023738490 container create 00e7c20f83b0d02158cd25647321c1cf8167eff09182f0daa1cb58cfd5bd7dd1 (image=quay.io/ceph/ceph:v18, name=nervous_faraday, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 11:38:44 compute-0 systemd[1]: Started libpod-conmon-00e7c20f83b0d02158cd25647321c1cf8167eff09182f0daa1cb58cfd5bd7dd1.scope.
Nov 26 11:38:44 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45452762b9032e9d5f783a947490785de773b3402dc33389382ba09a1ef4d07e/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45452762b9032e9d5f783a947490785de773b3402dc33389382ba09a1ef4d07e/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45452762b9032e9d5f783a947490785de773b3402dc33389382ba09a1ef4d07e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45452762b9032e9d5f783a947490785de773b3402dc33389382ba09a1ef4d07e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45452762b9032e9d5f783a947490785de773b3402dc33389382ba09a1ef4d07e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:44 compute-0 podman[76279]: 2025-11-26 11:38:44.226592968 +0000 UTC m=+0.085412240 container init 00e7c20f83b0d02158cd25647321c1cf8167eff09182f0daa1cb58cfd5bd7dd1 (image=quay.io/ceph/ceph:v18, name=nervous_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 26 11:38:44 compute-0 podman[76279]: 2025-11-26 11:38:44.232930553 +0000 UTC m=+0.091749804 container start 00e7c20f83b0d02158cd25647321c1cf8167eff09182f0daa1cb58cfd5bd7dd1 (image=quay.io/ceph/ceph:v18, name=nervous_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:38:44 compute-0 podman[76279]: 2025-11-26 11:38:44.23396783 +0000 UTC m=+0.092787081 container attach 00e7c20f83b0d02158cd25647321c1cf8167eff09182f0daa1cb58cfd5bd7dd1 (image=quay.io/ceph/ceph:v18, name=nervous_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 11:38:44 compute-0 podman[76279]: 2025-11-26 11:38:44.155548993 +0000 UTC m=+0.014368264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:44 compute-0 ceph-mon[74928]: [26/Nov/2025:11:38:42] ENGINE Serving on http://192.168.122.100:8765
Nov 26 11:38:44 compute-0 ceph-mon[74928]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:38:44 compute-0 ceph-mon[74928]: [26/Nov/2025:11:38:42] ENGINE Serving on https://192.168.122.100:7150
Nov 26 11:38:44 compute-0 ceph-mon[74928]: [26/Nov/2025:11:38:42] ENGINE Bus STARTED
Nov 26 11:38:44 compute-0 ceph-mon[74928]: [26/Nov/2025:11:38:42] ENGINE Client ('192.168.122.100', 59754) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 26 11:38:44 compute-0 ceph-mon[74928]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:38:44 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:44 compute-0 ceph-mon[74928]: Set ssh ssh_user
Nov 26 11:38:44 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:44 compute-0 ceph-mon[74928]: Set ssh ssh_config
Nov 26 11:38:44 compute-0 ceph-mon[74928]: ssh user set to ceph-admin. sudo will be used
Nov 26 11:38:44 compute-0 ceph-mon[74928]: mgrmap e8: compute-0.mwrktr(active, since 2s)
Nov 26 11:38:44 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:44 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:38:44 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Nov 26 11:38:44 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:44 compute-0 ceph-mgr[75197]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 26 11:38:44 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 26 11:38:44 compute-0 systemd[1]: libpod-00e7c20f83b0d02158cd25647321c1cf8167eff09182f0daa1cb58cfd5bd7dd1.scope: Deactivated successfully.
Nov 26 11:38:44 compute-0 podman[76318]: 2025-11-26 11:38:44.686941624 +0000 UTC m=+0.016879992 container died 00e7c20f83b0d02158cd25647321c1cf8167eff09182f0daa1cb58cfd5bd7dd1 (image=quay.io/ceph/ceph:v18, name=nervous_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 11:38:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-45452762b9032e9d5f783a947490785de773b3402dc33389382ba09a1ef4d07e-merged.mount: Deactivated successfully.
Nov 26 11:38:44 compute-0 podman[76318]: 2025-11-26 11:38:44.706725223 +0000 UTC m=+0.036663601 container remove 00e7c20f83b0d02158cd25647321c1cf8167eff09182f0daa1cb58cfd5bd7dd1 (image=quay.io/ceph/ceph:v18, name=nervous_faraday, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:38:44 compute-0 systemd[1]: libpod-conmon-00e7c20f83b0d02158cd25647321c1cf8167eff09182f0daa1cb58cfd5bd7dd1.scope: Deactivated successfully.
Nov 26 11:38:44 compute-0 podman[76329]: 2025-11-26 11:38:44.749132489 +0000 UTC m=+0.026037929 container create 16fddb3e321ef07ed6e51a0e317a2e49f1865ba300f1406a83022289e227eeb0 (image=quay.io/ceph/ceph:v18, name=sleepy_curie, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:38:44 compute-0 systemd[1]: Started libpod-conmon-16fddb3e321ef07ed6e51a0e317a2e49f1865ba300f1406a83022289e227eeb0.scope.
Nov 26 11:38:44 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24010ce7cf2901d98675a84198c1f41a62b22798e313f5c250a421533371f980/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24010ce7cf2901d98675a84198c1f41a62b22798e313f5c250a421533371f980/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24010ce7cf2901d98675a84198c1f41a62b22798e313f5c250a421533371f980/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:44 compute-0 podman[76329]: 2025-11-26 11:38:44.800239225 +0000 UTC m=+0.077144675 container init 16fddb3e321ef07ed6e51a0e317a2e49f1865ba300f1406a83022289e227eeb0 (image=quay.io/ceph/ceph:v18, name=sleepy_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 11:38:44 compute-0 podman[76329]: 2025-11-26 11:38:44.804260431 +0000 UTC m=+0.081165870 container start 16fddb3e321ef07ed6e51a0e317a2e49f1865ba300f1406a83022289e227eeb0 (image=quay.io/ceph/ceph:v18, name=sleepy_curie, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:38:44 compute-0 podman[76329]: 2025-11-26 11:38:44.805281206 +0000 UTC m=+0.082186647 container attach 16fddb3e321ef07ed6e51a0e317a2e49f1865ba300f1406a83022289e227eeb0 (image=quay.io/ceph/ceph:v18, name=sleepy_curie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:38:44 compute-0 podman[76329]: 2025-11-26 11:38:44.738755244 +0000 UTC m=+0.015660704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:45 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:38:45 compute-0 sleepy_curie[76343]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvKT/QkMLJ2JJs+mLNPVa567vDwidNCeiDkLDhQyJ/CAnEAeoWv9TDQZjU5Nh1b6tP9cgYTgS4kFxHwiQCcZ4iCzIj5/xj7u0yNzxqgr9G48S/f0GWPXV46n49JNkNn5q3NwjfYF4yyu2aGCfk/3XnCRIegyrsnEjtG5H1YGR3FfWirnkNg0avZxn9z5YDQbNB2IYAEsj5N70h1lobabDV4qaY49B4xxGh4yLS+9B6IlVEppqW22dEU+2LaFacO7ZlMTxGlQBkhnZ+I2IPggjN8/5nunubdI+D0gldVrlszJc+GehbYJwDHLXukJbKlT5dnN7NpFqbQLpHVvoM6U1Fm+3jW7WoYzloPyfccISE22g+seoKu4GX9ZUSjpVNtZP1PkZEDxXobDn5qleNeQBn18+pE9nUKUtIR1byg2NmLZk6ZhM0JDZklljdmIPWdWf55mnjLQHpUwgI2D1pMgp3Wbtu6JfyJ73H7pr0ztaNwBsDeBIG3fePYTlM8wj/cFs= zuul@controller
Nov 26 11:38:45 compute-0 systemd[1]: libpod-16fddb3e321ef07ed6e51a0e317a2e49f1865ba300f1406a83022289e227eeb0.scope: Deactivated successfully.
Nov 26 11:38:45 compute-0 conmon[76343]: conmon 16fddb3e321ef07ed6e5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-16fddb3e321ef07ed6e51a0e317a2e49f1865ba300f1406a83022289e227eeb0.scope/container/memory.events
Nov 26 11:38:45 compute-0 podman[76329]: 2025-11-26 11:38:45.231911567 +0000 UTC m=+0.508817008 container died 16fddb3e321ef07ed6e51a0e317a2e49f1865ba300f1406a83022289e227eeb0 (image=quay.io/ceph/ceph:v18, name=sleepy_curie, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:38:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-24010ce7cf2901d98675a84198c1f41a62b22798e313f5c250a421533371f980-merged.mount: Deactivated successfully.
Nov 26 11:38:45 compute-0 podman[76329]: 2025-11-26 11:38:45.254021474 +0000 UTC m=+0.530926914 container remove 16fddb3e321ef07ed6e51a0e317a2e49f1865ba300f1406a83022289e227eeb0 (image=quay.io/ceph/ceph:v18, name=sleepy_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 11:38:45 compute-0 systemd[1]: libpod-conmon-16fddb3e321ef07ed6e51a0e317a2e49f1865ba300f1406a83022289e227eeb0.scope: Deactivated successfully.
Nov 26 11:38:45 compute-0 podman[76378]: 2025-11-26 11:38:45.29695837 +0000 UTC m=+0.028065774 container create 793e4282353d95b60407a3f406ab27bc477c9716b0cf223f6f11c720df164d6d (image=quay.io/ceph/ceph:v18, name=cranky_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 11:38:45 compute-0 systemd[1]: Started libpod-conmon-793e4282353d95b60407a3f406ab27bc477c9716b0cf223f6f11c720df164d6d.scope.
Nov 26 11:38:45 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c68bca24a1ee04a89660ca482c11c3eec71b1e90f1616af245e1027b06c8a096/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c68bca24a1ee04a89660ca482c11c3eec71b1e90f1616af245e1027b06c8a096/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c68bca24a1ee04a89660ca482c11c3eec71b1e90f1616af245e1027b06c8a096/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:45 compute-0 podman[76378]: 2025-11-26 11:38:45.349419981 +0000 UTC m=+0.080527405 container init 793e4282353d95b60407a3f406ab27bc477c9716b0cf223f6f11c720df164d6d (image=quay.io/ceph/ceph:v18, name=cranky_sinoussi, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 11:38:45 compute-0 podman[76378]: 2025-11-26 11:38:45.353683023 +0000 UTC m=+0.084790417 container start 793e4282353d95b60407a3f406ab27bc477c9716b0cf223f6f11c720df164d6d (image=quay.io/ceph/ceph:v18, name=cranky_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:38:45 compute-0 podman[76378]: 2025-11-26 11:38:45.355006329 +0000 UTC m=+0.086113743 container attach 793e4282353d95b60407a3f406ab27bc477c9716b0cf223f6f11c720df164d6d (image=quay.io/ceph/ceph:v18, name=cranky_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:38:45 compute-0 ceph-mgr[75197]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 11:38:45 compute-0 podman[76378]: 2025-11-26 11:38:45.286042028 +0000 UTC m=+0.017149451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:45 compute-0 ceph-mon[74928]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:38:45 compute-0 ceph-mon[74928]: Set ssh ssh_identity_key
Nov 26 11:38:45 compute-0 ceph-mon[74928]: Set ssh private key
Nov 26 11:38:45 compute-0 ceph-mon[74928]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:38:45 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:45 compute-0 ceph-mon[74928]: Set ssh ssh_identity_pub
Nov 26 11:38:45 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:38:45 compute-0 sshd-session[76418]: Accepted publickey for ceph-admin from 192.168.122.100 port 39518 ssh2: RSA SHA256:UwRHloH7+q4x7CI/eXsFrZa7OprktgY5vDgjNOULMBQ
Nov 26 11:38:45 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 26 11:38:45 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 26 11:38:45 compute-0 systemd-logind[744]: New session 20 of user ceph-admin.
Nov 26 11:38:45 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 26 11:38:45 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 26 11:38:45 compute-0 systemd[76422]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 11:38:46 compute-0 systemd[76422]: Queued start job for default target Main User Target.
Nov 26 11:38:46 compute-0 systemd[76422]: Created slice User Application Slice.
Nov 26 11:38:46 compute-0 systemd[76422]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 26 11:38:46 compute-0 systemd[76422]: Started Daily Cleanup of User's Temporary Directories.
Nov 26 11:38:46 compute-0 systemd[76422]: Reached target Paths.
Nov 26 11:38:46 compute-0 systemd[76422]: Reached target Timers.
Nov 26 11:38:46 compute-0 systemd[76422]: Starting D-Bus User Message Bus Socket...
Nov 26 11:38:46 compute-0 systemd[76422]: Starting Create User's Volatile Files and Directories...
Nov 26 11:38:46 compute-0 systemd[76422]: Listening on D-Bus User Message Bus Socket.
Nov 26 11:38:46 compute-0 systemd[76422]: Finished Create User's Volatile Files and Directories.
Nov 26 11:38:46 compute-0 systemd[76422]: Reached target Sockets.
Nov 26 11:38:46 compute-0 systemd[76422]: Reached target Basic System.
Nov 26 11:38:46 compute-0 systemd[76422]: Reached target Main User Target.
Nov 26 11:38:46 compute-0 systemd[76422]: Startup finished in 92ms.
Nov 26 11:38:46 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 26 11:38:46 compute-0 systemd[1]: Started Session 20 of User ceph-admin.
Nov 26 11:38:46 compute-0 sshd-session[76418]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 11:38:46 compute-0 sshd-session[76435]: Accepted publickey for ceph-admin from 192.168.122.100 port 39524 ssh2: RSA SHA256:UwRHloH7+q4x7CI/eXsFrZa7OprktgY5vDgjNOULMBQ
Nov 26 11:38:46 compute-0 systemd-logind[744]: New session 22 of user ceph-admin.
Nov 26 11:38:46 compute-0 systemd[1]: Started Session 22 of User ceph-admin.
Nov 26 11:38:46 compute-0 sshd-session[76435]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 11:38:46 compute-0 sudo[76442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:46 compute-0 sudo[76442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:46 compute-0 sudo[76442]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:46 compute-0 sudo[76467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:38:46 compute-0 sudo[76467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:46 compute-0 sudo[76467]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:46 compute-0 sshd-session[76492]: Accepted publickey for ceph-admin from 192.168.122.100 port 39528 ssh2: RSA SHA256:UwRHloH7+q4x7CI/eXsFrZa7OprktgY5vDgjNOULMBQ
Nov 26 11:38:46 compute-0 systemd-logind[744]: New session 23 of user ceph-admin.
Nov 26 11:38:46 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Nov 26 11:38:46 compute-0 sshd-session[76492]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 11:38:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053132 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:38:46 compute-0 sudo[76496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:46 compute-0 sudo[76496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:46 compute-0 sudo[76496]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:46 compute-0 sudo[76521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 26 11:38:46 compute-0 sudo[76521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:46 compute-0 sudo[76521]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:46 compute-0 ceph-mon[74928]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:38:46 compute-0 sshd-session[76546]: Accepted publickey for ceph-admin from 192.168.122.100 port 39536 ssh2: RSA SHA256:UwRHloH7+q4x7CI/eXsFrZa7OprktgY5vDgjNOULMBQ
Nov 26 11:38:46 compute-0 systemd-logind[744]: New session 24 of user ceph-admin.
Nov 26 11:38:46 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Nov 26 11:38:46 compute-0 sshd-session[76546]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 11:38:46 compute-0 sudo[76550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:46 compute-0 sudo[76550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:46 compute-0 sudo[76550]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:46 compute-0 sudo[76575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 26 11:38:46 compute-0 sudo[76575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:46 compute-0 sudo[76575]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:46 compute-0 ceph-mgr[75197]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 26 11:38:46 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 26 11:38:47 compute-0 sshd-session[76600]: Accepted publickey for ceph-admin from 192.168.122.100 port 39538 ssh2: RSA SHA256:UwRHloH7+q4x7CI/eXsFrZa7OprktgY5vDgjNOULMBQ
Nov 26 11:38:47 compute-0 systemd-logind[744]: New session 25 of user ceph-admin.
Nov 26 11:38:47 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Nov 26 11:38:47 compute-0 sshd-session[76600]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 11:38:47 compute-0 sudo[76604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:47 compute-0 sudo[76604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:47 compute-0 sudo[76604]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:47 compute-0 sudo[76629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7
Nov 26 11:38:47 compute-0 sudo[76629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:47 compute-0 sudo[76629]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:47 compute-0 ceph-mgr[75197]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 11:38:47 compute-0 sshd-session[76654]: Accepted publickey for ceph-admin from 192.168.122.100 port 39542 ssh2: RSA SHA256:UwRHloH7+q4x7CI/eXsFrZa7OprktgY5vDgjNOULMBQ
Nov 26 11:38:47 compute-0 systemd-logind[744]: New session 26 of user ceph-admin.
Nov 26 11:38:47 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Nov 26 11:38:47 compute-0 sshd-session[76654]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 11:38:47 compute-0 sudo[76658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:47 compute-0 sudo[76658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:47 compute-0 sudo[76658]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:47 compute-0 sudo[76683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7
Nov 26 11:38:47 compute-0 sudo[76683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:47 compute-0 sudo[76683]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:47 compute-0 ceph-mon[74928]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:38:47 compute-0 sshd-session[76708]: Accepted publickey for ceph-admin from 192.168.122.100 port 39552 ssh2: RSA SHA256:UwRHloH7+q4x7CI/eXsFrZa7OprktgY5vDgjNOULMBQ
Nov 26 11:38:47 compute-0 systemd-logind[744]: New session 27 of user ceph-admin.
Nov 26 11:38:47 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Nov 26 11:38:47 compute-0 sshd-session[76708]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 11:38:47 compute-0 sudo[76712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:47 compute-0 sudo[76712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:47 compute-0 sudo[76712]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:47 compute-0 sudo[76737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 26 11:38:47 compute-0 sudo[76737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:47 compute-0 sudo[76737]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:47 compute-0 sshd-session[76762]: Accepted publickey for ceph-admin from 192.168.122.100 port 39554 ssh2: RSA SHA256:UwRHloH7+q4x7CI/eXsFrZa7OprktgY5vDgjNOULMBQ
Nov 26 11:38:47 compute-0 systemd-logind[744]: New session 28 of user ceph-admin.
Nov 26 11:38:48 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Nov 26 11:38:48 compute-0 sshd-session[76762]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 11:38:48 compute-0 sudo[76766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:48 compute-0 sudo[76766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:48 compute-0 sudo[76766]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:48 compute-0 sudo[76791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7
Nov 26 11:38:48 compute-0 sudo[76791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:48 compute-0 sudo[76791]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:48 compute-0 sshd-session[76816]: Accepted publickey for ceph-admin from 192.168.122.100 port 39564 ssh2: RSA SHA256:UwRHloH7+q4x7CI/eXsFrZa7OprktgY5vDgjNOULMBQ
Nov 26 11:38:48 compute-0 systemd-logind[744]: New session 29 of user ceph-admin.
Nov 26 11:38:48 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Nov 26 11:38:48 compute-0 sshd-session[76816]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 11:38:48 compute-0 sudo[76820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:48 compute-0 sudo[76820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:48 compute-0 sudo[76820]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:48 compute-0 sudo[76845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 26 11:38:48 compute-0 sudo[76845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:48 compute-0 sudo[76845]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:48 compute-0 sshd-session[76870]: Accepted publickey for ceph-admin from 192.168.122.100 port 39578 ssh2: RSA SHA256:UwRHloH7+q4x7CI/eXsFrZa7OprktgY5vDgjNOULMBQ
Nov 26 11:38:48 compute-0 systemd-logind[744]: New session 30 of user ceph-admin.
Nov 26 11:38:48 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Nov 26 11:38:48 compute-0 sshd-session[76870]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 11:38:48 compute-0 ceph-mon[74928]: Deploying cephadm binary to compute-0
Nov 26 11:38:49 compute-0 sshd-session[76897]: Accepted publickey for ceph-admin from 192.168.122.100 port 39590 ssh2: RSA SHA256:UwRHloH7+q4x7CI/eXsFrZa7OprktgY5vDgjNOULMBQ
Nov 26 11:38:49 compute-0 systemd-logind[744]: New session 31 of user ceph-admin.
Nov 26 11:38:49 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Nov 26 11:38:49 compute-0 sshd-session[76897]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 11:38:49 compute-0 sudo[76901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:49 compute-0 sudo[76901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:49 compute-0 sudo[76901]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:49 compute-0 sudo[76926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 26 11:38:49 compute-0 sudo[76926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:49 compute-0 sudo[76926]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:49 compute-0 sshd-session[76951]: Accepted publickey for ceph-admin from 192.168.122.100 port 39600 ssh2: RSA SHA256:UwRHloH7+q4x7CI/eXsFrZa7OprktgY5vDgjNOULMBQ
Nov 26 11:38:49 compute-0 systemd-logind[744]: New session 32 of user ceph-admin.
Nov 26 11:38:49 compute-0 ceph-mgr[75197]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 11:38:49 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Nov 26 11:38:49 compute-0 sshd-session[76951]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 11:38:49 compute-0 sudo[76955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:49 compute-0 sudo[76955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:49 compute-0 sudo[76955]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:49 compute-0 sudo[76980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 26 11:38:49 compute-0 sudo[76980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:49 compute-0 sudo[76980]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:49 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 26 11:38:49 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:49 compute-0 ceph-mgr[75197]: [cephadm INFO root] Added host compute-0
Nov 26 11:38:49 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 26 11:38:49 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 26 11:38:49 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 11:38:49 compute-0 cranky_sinoussi[76392]: Added host 'compute-0' with addr '192.168.122.100'
Nov 26 11:38:49 compute-0 systemd[1]: libpod-793e4282353d95b60407a3f406ab27bc477c9716b0cf223f6f11c720df164d6d.scope: Deactivated successfully.
Nov 26 11:38:49 compute-0 podman[76378]: 2025-11-26 11:38:49.68198592 +0000 UTC m=+4.413093324 container died 793e4282353d95b60407a3f406ab27bc477c9716b0cf223f6f11c720df164d6d (image=quay.io/ceph/ceph:v18, name=cranky_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:38:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-c68bca24a1ee04a89660ca482c11c3eec71b1e90f1616af245e1027b06c8a096-merged.mount: Deactivated successfully.
Nov 26 11:38:49 compute-0 podman[76378]: 2025-11-26 11:38:49.704307878 +0000 UTC m=+4.435415281 container remove 793e4282353d95b60407a3f406ab27bc477c9716b0cf223f6f11c720df164d6d (image=quay.io/ceph/ceph:v18, name=cranky_sinoussi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:38:49 compute-0 systemd[1]: libpod-conmon-793e4282353d95b60407a3f406ab27bc477c9716b0cf223f6f11c720df164d6d.scope: Deactivated successfully.
Nov 26 11:38:49 compute-0 sudo[77024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:49 compute-0 sudo[77024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:49 compute-0 sudo[77024]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:49 compute-0 podman[77056]: 2025-11-26 11:38:49.74361748 +0000 UTC m=+0.026033430 container create 83a70e766a3c40a6cbfdf66e20e8f12d7f92c9d6e454c27248d4c7db84c202ef (image=quay.io/ceph/ceph:v18, name=hungry_newton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 26 11:38:49 compute-0 systemd[1]: Started libpod-conmon-83a70e766a3c40a6cbfdf66e20e8f12d7f92c9d6e454c27248d4c7db84c202ef.scope.
Nov 26 11:38:49 compute-0 sudo[77067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:38:49 compute-0 sudo[77067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:49 compute-0 sudo[77067]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:49 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa94c294a30834446e660b525ddd96e0fd05b24935d188889f202adb5310efec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa94c294a30834446e660b525ddd96e0fd05b24935d188889f202adb5310efec/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa94c294a30834446e660b525ddd96e0fd05b24935d188889f202adb5310efec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:49 compute-0 podman[77056]: 2025-11-26 11:38:49.800853919 +0000 UTC m=+0.083269878 container init 83a70e766a3c40a6cbfdf66e20e8f12d7f92c9d6e454c27248d4c7db84c202ef (image=quay.io/ceph/ceph:v18, name=hungry_newton, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:38:49 compute-0 podman[77056]: 2025-11-26 11:38:49.806156111 +0000 UTC m=+0.088572060 container start 83a70e766a3c40a6cbfdf66e20e8f12d7f92c9d6e454c27248d4c7db84c202ef (image=quay.io/ceph/ceph:v18, name=hungry_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 11:38:49 compute-0 podman[77056]: 2025-11-26 11:38:49.807240075 +0000 UTC m=+0.089656025 container attach 83a70e766a3c40a6cbfdf66e20e8f12d7f92c9d6e454c27248d4c7db84c202ef (image=quay.io/ceph/ceph:v18, name=hungry_newton, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:38:49 compute-0 sudo[77099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:49 compute-0 sudo[77099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:49 compute-0 sudo[77099]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:49 compute-0 podman[77056]: 2025-11-26 11:38:49.733738495 +0000 UTC m=+0.016154464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:49 compute-0 sudo[77126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Nov 26 11:38:49 compute-0 sudo[77126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:50 compute-0 podman[77171]: 2025-11-26 11:38:50.04093 +0000 UTC m=+0.025099818 container create 66deafc35aaf31326925f2190d3d1173f5745046157af7994f7b93145ef79c7a (image=quay.io/ceph/ceph:v18, name=wonderful_bassi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:38:50 compute-0 systemd[1]: Started libpod-conmon-66deafc35aaf31326925f2190d3d1173f5745046157af7994f7b93145ef79c7a.scope.
Nov 26 11:38:50 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:50 compute-0 podman[77171]: 2025-11-26 11:38:50.095316063 +0000 UTC m=+0.079485880 container init 66deafc35aaf31326925f2190d3d1173f5745046157af7994f7b93145ef79c7a (image=quay.io/ceph/ceph:v18, name=wonderful_bassi, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:38:50 compute-0 podman[77171]: 2025-11-26 11:38:50.099514252 +0000 UTC m=+0.083684070 container start 66deafc35aaf31326925f2190d3d1173f5745046157af7994f7b93145ef79c7a (image=quay.io/ceph/ceph:v18, name=wonderful_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:38:50 compute-0 podman[77171]: 2025-11-26 11:38:50.100587247 +0000 UTC m=+0.084757064 container attach 66deafc35aaf31326925f2190d3d1173f5745046157af7994f7b93145ef79c7a (image=quay.io/ceph/ceph:v18, name=wonderful_bassi, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 11:38:50 compute-0 podman[77171]: 2025-11-26 11:38:50.030509945 +0000 UTC m=+0.014679783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:50 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:38:50 compute-0 ceph-mgr[75197]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 26 11:38:50 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 26 11:38:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 26 11:38:50 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:50 compute-0 hungry_newton[77094]: Scheduled mon update...
Nov 26 11:38:50 compute-0 systemd[1]: libpod-83a70e766a3c40a6cbfdf66e20e8f12d7f92c9d6e454c27248d4c7db84c202ef.scope: Deactivated successfully.
Nov 26 11:38:50 compute-0 podman[77056]: 2025-11-26 11:38:50.25950837 +0000 UTC m=+0.541924329 container died 83a70e766a3c40a6cbfdf66e20e8f12d7f92c9d6e454c27248d4c7db84c202ef (image=quay.io/ceph/ceph:v18, name=hungry_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:38:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa94c294a30834446e660b525ddd96e0fd05b24935d188889f202adb5310efec-merged.mount: Deactivated successfully.
Nov 26 11:38:50 compute-0 podman[77056]: 2025-11-26 11:38:50.281572992 +0000 UTC m=+0.563988941 container remove 83a70e766a3c40a6cbfdf66e20e8f12d7f92c9d6e454c27248d4c7db84c202ef (image=quay.io/ceph/ceph:v18, name=hungry_newton, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 11:38:50 compute-0 systemd[1]: libpod-conmon-83a70e766a3c40a6cbfdf66e20e8f12d7f92c9d6e454c27248d4c7db84c202ef.scope: Deactivated successfully.
Nov 26 11:38:50 compute-0 podman[77220]: 2025-11-26 11:38:50.320259899 +0000 UTC m=+0.024691899 container create eaaf507780fdf8bd862eaa515ef2cf1bf8826c7dbb4ab85ce2ef7ac4d67a89c8 (image=quay.io/ceph/ceph:v18, name=goofy_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 11:38:50 compute-0 systemd[1]: Started libpod-conmon-eaaf507780fdf8bd862eaa515ef2cf1bf8826c7dbb4ab85ce2ef7ac4d67a89c8.scope.
Nov 26 11:38:50 compute-0 wonderful_bassi[77202]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 26 11:38:50 compute-0 podman[77171]: 2025-11-26 11:38:50.36258484 +0000 UTC m=+0.346754658 container died 66deafc35aaf31326925f2190d3d1173f5745046157af7994f7b93145ef79c7a (image=quay.io/ceph/ceph:v18, name=wonderful_bassi, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:38:50 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:50 compute-0 systemd[1]: libpod-66deafc35aaf31326925f2190d3d1173f5745046157af7994f7b93145ef79c7a.scope: Deactivated successfully.
Nov 26 11:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddab15500b7bb83b0978cd04e5f2bc4def083b3f33f59ef5015b3c7a5206a575/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddab15500b7bb83b0978cd04e5f2bc4def083b3f33f59ef5015b3c7a5206a575/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddab15500b7bb83b0978cd04e5f2bc4def083b3f33f59ef5015b3c7a5206a575/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:50 compute-0 podman[77220]: 2025-11-26 11:38:50.377599403 +0000 UTC m=+0.082031403 container init eaaf507780fdf8bd862eaa515ef2cf1bf8826c7dbb4ab85ce2ef7ac4d67a89c8 (image=quay.io/ceph/ceph:v18, name=goofy_poincare, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:38:50 compute-0 podman[77220]: 2025-11-26 11:38:50.382592903 +0000 UTC m=+0.087024903 container start eaaf507780fdf8bd862eaa515ef2cf1bf8826c7dbb4ab85ce2ef7ac4d67a89c8 (image=quay.io/ceph/ceph:v18, name=goofy_poincare, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:38:50 compute-0 podman[77220]: 2025-11-26 11:38:50.385669124 +0000 UTC m=+0.090101145 container attach eaaf507780fdf8bd862eaa515ef2cf1bf8826c7dbb4ab85ce2ef7ac4d67a89c8 (image=quay.io/ceph/ceph:v18, name=goofy_poincare, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:38:50 compute-0 podman[77171]: 2025-11-26 11:38:50.398013542 +0000 UTC m=+0.382183359 container remove 66deafc35aaf31326925f2190d3d1173f5745046157af7994f7b93145ef79c7a (image=quay.io/ceph/ceph:v18, name=wonderful_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:38:50 compute-0 systemd[1]: libpod-conmon-66deafc35aaf31326925f2190d3d1173f5745046157af7994f7b93145ef79c7a.scope: Deactivated successfully.
Nov 26 11:38:50 compute-0 podman[77220]: 2025-11-26 11:38:50.309918612 +0000 UTC m=+0.014350631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:50 compute-0 sudo[77126]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Nov 26 11:38:50 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:50 compute-0 sudo[77249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:50 compute-0 sudo[77249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:50 compute-0 sudo[77249]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:50 compute-0 sudo[77274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:38:50 compute-0 sudo[77274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:50 compute-0 sudo[77274]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:50 compute-0 sudo[77299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:50 compute-0 sudo[77299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:50 compute-0 sudo[77299]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:50 compute-0 sudo[77324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 26 11:38:50 compute-0 sudo[77324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:50 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:50 compute-0 ceph-mon[74928]: Added host compute-0
Nov 26 11:38:50 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 11:38:50 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:50 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-474665a5106965fe283fab0918831684884caaebfd7477c453bfc7f165a1b891-merged.mount: Deactivated successfully.
Nov 26 11:38:50 compute-0 sudo[77324]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:38:50 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:50 compute-0 sudo[77386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:50 compute-0 sudo[77386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:50 compute-0 sudo[77386]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:50 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:38:50 compute-0 ceph-mgr[75197]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 26 11:38:50 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 26 11:38:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 26 11:38:50 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:50 compute-0 goofy_poincare[77234]: Scheduled mgr update...
Nov 26 11:38:50 compute-0 systemd[1]: libpod-eaaf507780fdf8bd862eaa515ef2cf1bf8826c7dbb4ab85ce2ef7ac4d67a89c8.scope: Deactivated successfully.
Nov 26 11:38:50 compute-0 podman[77220]: 2025-11-26 11:38:50.833937094 +0000 UTC m=+0.538369093 container died eaaf507780fdf8bd862eaa515ef2cf1bf8826c7dbb4ab85ce2ef7ac4d67a89c8 (image=quay.io/ceph/ceph:v18, name=goofy_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 11:38:50 compute-0 sudo[77411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:38:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddab15500b7bb83b0978cd04e5f2bc4def083b3f33f59ef5015b3c7a5206a575-merged.mount: Deactivated successfully.
Nov 26 11:38:50 compute-0 sudo[77411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:50 compute-0 sudo[77411]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:50 compute-0 podman[77220]: 2025-11-26 11:38:50.862821741 +0000 UTC m=+0.567253740 container remove eaaf507780fdf8bd862eaa515ef2cf1bf8826c7dbb4ab85ce2ef7ac4d67a89c8 (image=quay.io/ceph/ceph:v18, name=goofy_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 11:38:50 compute-0 systemd[1]: libpod-conmon-eaaf507780fdf8bd862eaa515ef2cf1bf8826c7dbb4ab85ce2ef7ac4d67a89c8.scope: Deactivated successfully.
Nov 26 11:38:50 compute-0 sudo[77447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:50 compute-0 sudo[77447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:50 compute-0 sudo[77447]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:50 compute-0 podman[77457]: 2025-11-26 11:38:50.9057666 +0000 UTC m=+0.030388445 container create 3a54bc3eefe83b78d22c1959dfc3b9319da3304dfb7f7a895ac8df0ce5924235 (image=quay.io/ceph/ceph:v18, name=dreamy_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 11:38:50 compute-0 systemd[1]: Started libpod-conmon-3a54bc3eefe83b78d22c1959dfc3b9319da3304dfb7f7a895ac8df0ce5924235.scope.
Nov 26 11:38:50 compute-0 sudo[77481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 26 11:38:50 compute-0 sudo[77481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:50 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90b9dd1c501e4bb79ffe5bfbaeb888d44913df3ebf3f90ed026009fa6afe58ee/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90b9dd1c501e4bb79ffe5bfbaeb888d44913df3ebf3f90ed026009fa6afe58ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90b9dd1c501e4bb79ffe5bfbaeb888d44913df3ebf3f90ed026009fa6afe58ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:50 compute-0 podman[77457]: 2025-11-26 11:38:50.966215758 +0000 UTC m=+0.090837623 container init 3a54bc3eefe83b78d22c1959dfc3b9319da3304dfb7f7a895ac8df0ce5924235 (image=quay.io/ceph/ceph:v18, name=dreamy_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 11:38:50 compute-0 podman[77457]: 2025-11-26 11:38:50.970744272 +0000 UTC m=+0.095366117 container start 3a54bc3eefe83b78d22c1959dfc3b9319da3304dfb7f7a895ac8df0ce5924235 (image=quay.io/ceph/ceph:v18, name=dreamy_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 26 11:38:50 compute-0 podman[77457]: 2025-11-26 11:38:50.971866047 +0000 UTC m=+0.096487893 container attach 3a54bc3eefe83b78d22c1959dfc3b9319da3304dfb7f7a895ac8df0ce5924235 (image=quay.io/ceph/ceph:v18, name=dreamy_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 11:38:50 compute-0 podman[77457]: 2025-11-26 11:38:50.891553429 +0000 UTC m=+0.016175294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:51 compute-0 podman[77571]: 2025-11-26 11:38:51.24566383 +0000 UTC m=+0.039972564 container exec 810eaed6cbde00330c0646cedaef5c2ae94579236d67ba42b8ef02d055d04ad5 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:38:51 compute-0 ceph-mgr[75197]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 11:38:51 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:38:51 compute-0 ceph-mgr[75197]: [cephadm INFO root] Saving service crash spec with placement *
Nov 26 11:38:51 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 26 11:38:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 26 11:38:51 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:51 compute-0 dreamy_williamson[77509]: Scheduled crash update...
Nov 26 11:38:51 compute-0 systemd[1]: libpod-3a54bc3eefe83b78d22c1959dfc3b9319da3304dfb7f7a895ac8df0ce5924235.scope: Deactivated successfully.
Nov 26 11:38:51 compute-0 conmon[77509]: conmon 3a54bc3eefe83b78d22c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a54bc3eefe83b78d22c1959dfc3b9319da3304dfb7f7a895ac8df0ce5924235.scope/container/memory.events
Nov 26 11:38:51 compute-0 podman[77457]: 2025-11-26 11:38:51.418064311 +0000 UTC m=+0.542686166 container died 3a54bc3eefe83b78d22c1959dfc3b9319da3304dfb7f7a895ac8df0ce5924235 (image=quay.io/ceph/ceph:v18, name=dreamy_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 26 11:38:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-90b9dd1c501e4bb79ffe5bfbaeb888d44913df3ebf3f90ed026009fa6afe58ee-merged.mount: Deactivated successfully.
Nov 26 11:38:51 compute-0 podman[77457]: 2025-11-26 11:38:51.441826825 +0000 UTC m=+0.566448671 container remove 3a54bc3eefe83b78d22c1959dfc3b9319da3304dfb7f7a895ac8df0ce5924235 (image=quay.io/ceph/ceph:v18, name=dreamy_williamson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 26 11:38:51 compute-0 systemd[1]: libpod-conmon-3a54bc3eefe83b78d22c1959dfc3b9319da3304dfb7f7a895ac8df0ce5924235.scope: Deactivated successfully.
Nov 26 11:38:51 compute-0 podman[77617]: 2025-11-26 11:38:51.483193749 +0000 UTC m=+0.025943891 container create 4f871c6b0ace7c886372f68273f47962c1d6dd76ba0fef9db5a907d7b5fd090a (image=quay.io/ceph/ceph:v18, name=naughty_bhaskara, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 11:38:51 compute-0 podman[77571]: 2025-11-26 11:38:51.493429768 +0000 UTC m=+0.287738502 container exec_died 810eaed6cbde00330c0646cedaef5c2ae94579236d67ba42b8ef02d055d04ad5 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Nov 26 11:38:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:38:51 compute-0 systemd[1]: Started libpod-conmon-4f871c6b0ace7c886372f68273f47962c1d6dd76ba0fef9db5a907d7b5fd090a.scope.
Nov 26 11:38:51 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec957759bb3c8ce927fbcb78998c7732fe5db50cef2d609751294081e2d3d7d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec957759bb3c8ce927fbcb78998c7732fe5db50cef2d609751294081e2d3d7d5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec957759bb3c8ce927fbcb78998c7732fe5db50cef2d609751294081e2d3d7d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:51 compute-0 podman[77617]: 2025-11-26 11:38:51.52554009 +0000 UTC m=+0.068290232 container init 4f871c6b0ace7c886372f68273f47962c1d6dd76ba0fef9db5a907d7b5fd090a (image=quay.io/ceph/ceph:v18, name=naughty_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:38:51 compute-0 podman[77617]: 2025-11-26 11:38:51.533047171 +0000 UTC m=+0.075797313 container start 4f871c6b0ace7c886372f68273f47962c1d6dd76ba0fef9db5a907d7b5fd090a (image=quay.io/ceph/ceph:v18, name=naughty_bhaskara, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:38:51 compute-0 podman[77617]: 2025-11-26 11:38:51.534990807 +0000 UTC m=+0.077740950 container attach 4f871c6b0ace7c886372f68273f47962c1d6dd76ba0fef9db5a907d7b5fd090a (image=quay.io/ceph/ceph:v18, name=naughty_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:38:51 compute-0 podman[77617]: 2025-11-26 11:38:51.472281514 +0000 UTC m=+0.015031677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:51 compute-0 sudo[77481]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:38:51 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:51 compute-0 sudo[77660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:51 compute-0 sudo[77660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:51 compute-0 sudo[77660]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:51 compute-0 sudo[77685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:38:51 compute-0 sudo[77685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:51 compute-0 sudo[77685]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:51 compute-0 ceph-mon[74928]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:38:51 compute-0 ceph-mon[74928]: Saving service mon spec with placement count:5
Nov 26 11:38:51 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:51 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:51 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:51 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:51 compute-0 sudo[77710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:51 compute-0 sudo[77710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:51 compute-0 sudo[77710]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:51 compute-0 sudo[77735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 11:38:51 compute-0 sudo[77735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:51 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77789 (sysctl)
Nov 26 11:38:51 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 26 11:38:51 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 26 11:38:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Nov 26 11:38:51 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2282215554' entity='client.admin' 
Nov 26 11:38:51 compute-0 podman[77617]: 2025-11-26 11:38:51.984521377 +0000 UTC m=+0.527271519 container died 4f871c6b0ace7c886372f68273f47962c1d6dd76ba0fef9db5a907d7b5fd090a (image=quay.io/ceph/ceph:v18, name=naughty_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 11:38:51 compute-0 systemd[1]: libpod-4f871c6b0ace7c886372f68273f47962c1d6dd76ba0fef9db5a907d7b5fd090a.scope: Deactivated successfully.
Nov 26 11:38:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec957759bb3c8ce927fbcb78998c7732fe5db50cef2d609751294081e2d3d7d5-merged.mount: Deactivated successfully.
Nov 26 11:38:52 compute-0 podman[77617]: 2025-11-26 11:38:52.009569167 +0000 UTC m=+0.552319308 container remove 4f871c6b0ace7c886372f68273f47962c1d6dd76ba0fef9db5a907d7b5fd090a (image=quay.io/ceph/ceph:v18, name=naughty_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 11:38:52 compute-0 systemd[1]: libpod-conmon-4f871c6b0ace7c886372f68273f47962c1d6dd76ba0fef9db5a907d7b5fd090a.scope: Deactivated successfully.
Nov 26 11:38:52 compute-0 podman[77809]: 2025-11-26 11:38:52.055043277 +0000 UTC m=+0.030923013 container create 49adf82714580d17b1c6ea51a875896e6e00f777edb7ebabff92091dc09568be (image=quay.io/ceph/ceph:v18, name=happy_hypatia, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:38:52 compute-0 systemd[1]: Started libpod-conmon-49adf82714580d17b1c6ea51a875896e6e00f777edb7ebabff92091dc09568be.scope.
Nov 26 11:38:52 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa8864fbf76f0014e248ff129dda6b8bff015dbff5d63eee03cf676bdb501d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa8864fbf76f0014e248ff129dda6b8bff015dbff5d63eee03cf676bdb501d3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa8864fbf76f0014e248ff129dda6b8bff015dbff5d63eee03cf676bdb501d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:52 compute-0 sudo[77735]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:52 compute-0 podman[77809]: 2025-11-26 11:38:52.109400806 +0000 UTC m=+0.085280541 container init 49adf82714580d17b1c6ea51a875896e6e00f777edb7ebabff92091dc09568be (image=quay.io/ceph/ceph:v18, name=happy_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 26 11:38:52 compute-0 podman[77809]: 2025-11-26 11:38:52.11449788 +0000 UTC m=+0.090377616 container start 49adf82714580d17b1c6ea51a875896e6e00f777edb7ebabff92091dc09568be (image=quay.io/ceph/ceph:v18, name=happy_hypatia, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:38:52 compute-0 podman[77809]: 2025-11-26 11:38:52.115533003 +0000 UTC m=+0.091412739 container attach 49adf82714580d17b1c6ea51a875896e6e00f777edb7ebabff92091dc09568be (image=quay.io/ceph/ceph:v18, name=happy_hypatia, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:38:52 compute-0 podman[77809]: 2025-11-26 11:38:52.042446064 +0000 UTC m=+0.018325820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:52 compute-0 sudo[77841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:52 compute-0 sudo[77841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:52 compute-0 sudo[77841]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:52 compute-0 sudo[77866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:38:52 compute-0 sudo[77866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:52 compute-0 sudo[77866]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:52 compute-0 sudo[77891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:52 compute-0 sudo[77891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:52 compute-0 sudo[77891]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:52 compute-0 sudo[77916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 26 11:38:52 compute-0 sudo[77916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:52 compute-0 sudo[77916]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:38:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:52 compute-0 sudo[77977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:52 compute-0 sudo[77977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:52 compute-0 sudo[77977]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:52 compute-0 sudo[78002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:38:52 compute-0 sudo[78002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:52 compute-0 sudo[78002]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:52 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:38:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Nov 26 11:38:52 compute-0 sudo[78027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:52 compute-0 sudo[78027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:52 compute-0 sudo[78027]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:52 compute-0 systemd[1]: libpod-49adf82714580d17b1c6ea51a875896e6e00f777edb7ebabff92091dc09568be.scope: Deactivated successfully.
Nov 26 11:38:52 compute-0 podman[77809]: 2025-11-26 11:38:52.571434981 +0000 UTC m=+0.547314727 container died 49adf82714580d17b1c6ea51a875896e6e00f777edb7ebabff92091dc09568be (image=quay.io/ceph/ceph:v18, name=happy_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 11:38:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfa8864fbf76f0014e248ff129dda6b8bff015dbff5d63eee03cf676bdb501d3-merged.mount: Deactivated successfully.
Nov 26 11:38:52 compute-0 podman[77809]: 2025-11-26 11:38:52.592423614 +0000 UTC m=+0.568303350 container remove 49adf82714580d17b1c6ea51a875896e6e00f777edb7ebabff92091dc09568be (image=quay.io/ceph/ceph:v18, name=happy_hypatia, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:38:52 compute-0 systemd[1]: libpod-conmon-49adf82714580d17b1c6ea51a875896e6e00f777edb7ebabff92091dc09568be.scope: Deactivated successfully.
Nov 26 11:38:52 compute-0 sudo[78054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- inventory --format=json-pretty --filter-for-batch
Nov 26 11:38:52 compute-0 sudo[78054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:52 compute-0 podman[78086]: 2025-11-26 11:38:52.630995325 +0000 UTC m=+0.025370710 container create 832d7ada5306d5897fb005e3e42eae58eea111b13a7ec957eb9f70ab17657976 (image=quay.io/ceph/ceph:v18, name=brave_varahamihira, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 11:38:52 compute-0 systemd[1]: Started libpod-conmon-832d7ada5306d5897fb005e3e42eae58eea111b13a7ec957eb9f70ab17657976.scope.
Nov 26 11:38:52 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:52 compute-0 ceph-mon[74928]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:38:52 compute-0 ceph-mon[74928]: Saving service mgr spec with placement count:2
Nov 26 11:38:52 compute-0 ceph-mon[74928]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:38:52 compute-0 ceph-mon[74928]: Saving service crash spec with placement *
Nov 26 11:38:52 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2282215554' entity='client.admin' 
Nov 26 11:38:52 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:52 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
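The ceph-mon and ceph-mgr audit entries above record cephadm persisting the initial service specifications during bootstrap: a mon spec with count:5, an mgr spec with count:2, and a crash-collector spec placed on every host. As a rough sketch, the equivalent orchestrator CLI calls (run from an admin shell on this host; the counts and placement come from the log entries themselves, nothing else is implied) would be:

    ceph orch apply mon --placement=5
    ceph orch apply mgr --placement=2
    ceph orch apply crash --placement='*'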
Nov 26 11:38:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48bcb099f816f3a1caa5412bfb00c82cdc3f54fe342d5ebdf51cb97ea8d1acf7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48bcb099f816f3a1caa5412bfb00c82cdc3f54fe342d5ebdf51cb97ea8d1acf7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48bcb099f816f3a1caa5412bfb00c82cdc3f54fe342d5ebdf51cb97ea8d1acf7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:52 compute-0 podman[78086]: 2025-11-26 11:38:52.6832578 +0000 UTC m=+0.077633206 container init 832d7ada5306d5897fb005e3e42eae58eea111b13a7ec957eb9f70ab17657976 (image=quay.io/ceph/ceph:v18, name=brave_varahamihira, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:38:52 compute-0 podman[78086]: 2025-11-26 11:38:52.688083994 +0000 UTC m=+0.082459381 container start 832d7ada5306d5897fb005e3e42eae58eea111b13a7ec957eb9f70ab17657976 (image=quay.io/ceph/ceph:v18, name=brave_varahamihira, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:38:52 compute-0 podman[78086]: 2025-11-26 11:38:52.689154494 +0000 UTC m=+0.083529899 container attach 832d7ada5306d5897fb005e3e42eae58eea111b13a7ec957eb9f70ab17657976 (image=quay.io/ceph/ceph:v18, name=brave_varahamihira, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 11:38:52 compute-0 podman[78086]: 2025-11-26 11:38:52.621199866 +0000 UTC m=+0.015575272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:52 compute-0 podman[78136]: 2025-11-26 11:38:52.840824679 +0000 UTC m=+0.027373438 container create 129c8952a856c42d5b2e18e460714395b4fd0638a953647c3262b2ca757dbbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 11:38:52 compute-0 systemd[1]: Started libpod-conmon-129c8952a856c42d5b2e18e460714395b4fd0638a953647c3262b2ca757dbbc2.scope.
Nov 26 11:38:52 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:52 compute-0 podman[78136]: 2025-11-26 11:38:52.877535138 +0000 UTC m=+0.064083897 container init 129c8952a856c42d5b2e18e460714395b4fd0638a953647c3262b2ca757dbbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:38:52 compute-0 podman[78136]: 2025-11-26 11:38:52.88173949 +0000 UTC m=+0.068288248 container start 129c8952a856c42d5b2e18e460714395b4fd0638a953647c3262b2ca757dbbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dijkstra, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 11:38:52 compute-0 romantic_dijkstra[78149]: 167 167
Nov 26 11:38:52 compute-0 systemd[1]: libpod-129c8952a856c42d5b2e18e460714395b4fd0638a953647c3262b2ca757dbbc2.scope: Deactivated successfully.
Nov 26 11:38:52 compute-0 podman[78136]: 2025-11-26 11:38:52.884923035 +0000 UTC m=+0.071471814 container attach 129c8952a856c42d5b2e18e460714395b4fd0638a953647c3262b2ca757dbbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 26 11:38:52 compute-0 podman[78136]: 2025-11-26 11:38:52.885084579 +0000 UTC m=+0.071633337 container died 129c8952a856c42d5b2e18e460714395b4fd0638a953647c3262b2ca757dbbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dijkstra, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:38:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-033f93681be631ecbf951d93cce7d96f9251cab915c76ee1e66059f350b23e49-merged.mount: Deactivated successfully.
Nov 26 11:38:52 compute-0 podman[78136]: 2025-11-26 11:38:52.904445341 +0000 UTC m=+0.090994099 container remove 129c8952a856c42d5b2e18e460714395b4fd0638a953647c3262b2ca757dbbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dijkstra, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:38:52 compute-0 podman[78136]: 2025-11-26 11:38:52.829560141 +0000 UTC m=+0.016108909 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:38:52 compute-0 systemd[1]: libpod-conmon-129c8952a856c42d5b2e18e460714395b4fd0638a953647c3262b2ca757dbbc2.scope: Deactivated successfully.
Nov 26 11:38:53 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:38:53 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 26 11:38:53 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:53 compute-0 ceph-mgr[75197]: [cephadm INFO root] Added label _admin to host compute-0
Nov 26 11:38:53 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 26 11:38:53 compute-0 brave_varahamihira[78102]: Added label _admin to host compute-0
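These lines show the bootstrap host compute-0 being tagged with the _admin label, which cephadm uses to decide where to keep copies of ceph.conf and the client.admin keyring, together with the client-keyring placement registered a few lines earlier. A minimal CLI sketch of the same two operations (hostname and placement taken from the log; assumes an admin session on the node) would be:

    ceph orch host label add compute-0 _admin
    ceph orch client-keyring set client.admin label:_admin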
Nov 26 11:38:53 compute-0 systemd[1]: libpod-832d7ada5306d5897fb005e3e42eae58eea111b13a7ec957eb9f70ab17657976.scope: Deactivated successfully.
Nov 26 11:38:53 compute-0 podman[78185]: 2025-11-26 11:38:53.158902922 +0000 UTC m=+0.015862323 container died 832d7ada5306d5897fb005e3e42eae58eea111b13a7ec957eb9f70ab17657976 (image=quay.io/ceph/ceph:v18, name=brave_varahamihira, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 11:38:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-48bcb099f816f3a1caa5412bfb00c82cdc3f54fe342d5ebdf51cb97ea8d1acf7-merged.mount: Deactivated successfully.
Nov 26 11:38:53 compute-0 podman[78185]: 2025-11-26 11:38:53.17895644 +0000 UTC m=+0.035915841 container remove 832d7ada5306d5897fb005e3e42eae58eea111b13a7ec957eb9f70ab17657976 (image=quay.io/ceph/ceph:v18, name=brave_varahamihira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:38:53 compute-0 systemd[1]: libpod-conmon-832d7ada5306d5897fb005e3e42eae58eea111b13a7ec957eb9f70ab17657976.scope: Deactivated successfully.
Nov 26 11:38:53 compute-0 podman[78197]: 2025-11-26 11:38:53.221395606 +0000 UTC m=+0.026353373 container create 1ee360826327940ed01542358aba20e3b99f9596d5fb84fc4615fa8635efd950 (image=quay.io/ceph/ceph:v18, name=optimistic_dhawan, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 11:38:53 compute-0 systemd[1]: Started libpod-conmon-1ee360826327940ed01542358aba20e3b99f9596d5fb84fc4615fa8635efd950.scope.
Nov 26 11:38:53 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8106386fc7347ac281133cfea9887c67a4366b6dc35d6f7cf218ae602a17f863/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8106386fc7347ac281133cfea9887c67a4366b6dc35d6f7cf218ae602a17f863/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8106386fc7347ac281133cfea9887c67a4366b6dc35d6f7cf218ae602a17f863/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:53 compute-0 podman[78197]: 2025-11-26 11:38:53.26004906 +0000 UTC m=+0.065006837 container init 1ee360826327940ed01542358aba20e3b99f9596d5fb84fc4615fa8635efd950 (image=quay.io/ceph/ceph:v18, name=optimistic_dhawan, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:38:53 compute-0 podman[78197]: 2025-11-26 11:38:53.264482804 +0000 UTC m=+0.069440561 container start 1ee360826327940ed01542358aba20e3b99f9596d5fb84fc4615fa8635efd950 (image=quay.io/ceph/ceph:v18, name=optimistic_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:38:53 compute-0 podman[78197]: 2025-11-26 11:38:53.268920384 +0000 UTC m=+0.073878161 container attach 1ee360826327940ed01542358aba20e3b99f9596d5fb84fc4615fa8635efd950 (image=quay.io/ceph/ceph:v18, name=optimistic_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:38:53 compute-0 podman[78197]: 2025-11-26 11:38:53.21057846 +0000 UTC m=+0.015536238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:53 compute-0 ceph-mgr[75197]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 11:38:53 compute-0 ceph-mon[74928]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:38:53 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:53 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Nov 26 11:38:53 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/797504925' entity='client.admin' 
Nov 26 11:38:53 compute-0 systemd[1]: libpod-1ee360826327940ed01542358aba20e3b99f9596d5fb84fc4615fa8635efd950.scope: Deactivated successfully.
Nov 26 11:38:53 compute-0 conmon[78210]: conmon 1ee360826327940ed015 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1ee360826327940ed01542358aba20e3b99f9596d5fb84fc4615fa8635efd950.scope/container/memory.events
Nov 26 11:38:53 compute-0 podman[78197]: 2025-11-26 11:38:53.698967381 +0000 UTC m=+0.503925138 container died 1ee360826327940ed01542358aba20e3b99f9596d5fb84fc4615fa8635efd950 (image=quay.io/ceph/ceph:v18, name=optimistic_dhawan, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 11:38:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-8106386fc7347ac281133cfea9887c67a4366b6dc35d6f7cf218ae602a17f863-merged.mount: Deactivated successfully.
Nov 26 11:38:53 compute-0 podman[78197]: 2025-11-26 11:38:53.719082957 +0000 UTC m=+0.524040714 container remove 1ee360826327940ed01542358aba20e3b99f9596d5fb84fc4615fa8635efd950 (image=quay.io/ceph/ceph:v18, name=optimistic_dhawan, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 11:38:53 compute-0 systemd[1]: libpod-conmon-1ee360826327940ed01542358aba20e3b99f9596d5fb84fc4615fa8635efd950.scope: Deactivated successfully.
Nov 26 11:38:53 compute-0 podman[78244]: 2025-11-26 11:38:53.758107652 +0000 UTC m=+0.024368630 container create 9d178f666f8164387b16bec257267b2190e8fd4651253e7096d2ee3505a64b1c (image=quay.io/ceph/ceph:v18, name=amazing_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 11:38:53 compute-0 systemd[1]: Started libpod-conmon-9d178f666f8164387b16bec257267b2190e8fd4651253e7096d2ee3505a64b1c.scope.
Nov 26 11:38:53 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0df7003e98e20382e238eaa5641db3e9485d2c97e8303609b34cecac311b295/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0df7003e98e20382e238eaa5641db3e9485d2c97e8303609b34cecac311b295/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0df7003e98e20382e238eaa5641db3e9485d2c97e8303609b34cecac311b295/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:53 compute-0 podman[78244]: 2025-11-26 11:38:53.816580352 +0000 UTC m=+0.082841340 container init 9d178f666f8164387b16bec257267b2190e8fd4651253e7096d2ee3505a64b1c (image=quay.io/ceph/ceph:v18, name=amazing_colden, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 11:38:53 compute-0 podman[78244]: 2025-11-26 11:38:53.820738626 +0000 UTC m=+0.086999605 container start 9d178f666f8164387b16bec257267b2190e8fd4651253e7096d2ee3505a64b1c (image=quay.io/ceph/ceph:v18, name=amazing_colden, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 11:38:53 compute-0 podman[78244]: 2025-11-26 11:38:53.821868428 +0000 UTC m=+0.088129405 container attach 9d178f666f8164387b16bec257267b2190e8fd4651253e7096d2ee3505a64b1c (image=quay.io/ceph/ceph:v18, name=amazing_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 11:38:53 compute-0 podman[78244]: 2025-11-26 11:38:53.748424375 +0000 UTC m=+0.014685373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:54 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Nov 26 11:38:54 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/326484135' entity='client.admin' 
Nov 26 11:38:54 compute-0 amazing_colden[78259]: set mgr/dashboard/cluster/status
Nov 26 11:38:54 compute-0 systemd[1]: libpod-9d178f666f8164387b16bec257267b2190e8fd4651253e7096d2ee3505a64b1c.scope: Deactivated successfully.
Nov 26 11:38:54 compute-0 podman[78285]: 2025-11-26 11:38:54.382284478 +0000 UTC m=+0.018906806 container died 9d178f666f8164387b16bec257267b2190e8fd4651253e7096d2ee3505a64b1c (image=quay.io/ceph/ceph:v18, name=amazing_colden, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:38:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0df7003e98e20382e238eaa5641db3e9485d2c97e8303609b34cecac311b295-merged.mount: Deactivated successfully.
Nov 26 11:38:54 compute-0 podman[78285]: 2025-11-26 11:38:54.401674044 +0000 UTC m=+0.038296372 container remove 9d178f666f8164387b16bec257267b2190e8fd4651253e7096d2ee3505a64b1c (image=quay.io/ceph/ceph:v18, name=amazing_colden, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:38:54 compute-0 systemd[1]: libpod-conmon-9d178f666f8164387b16bec257267b2190e8fd4651253e7096d2ee3505a64b1c.scope: Deactivated successfully.
Nov 26 11:38:54 compute-0 sudo[74024]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:54 compute-0 podman[78304]: 2025-11-26 11:38:54.528673491 +0000 UTC m=+0.031108844 container create 5f1f0a7804864bd2299af34b5809338cb08cab58ac75107b99ef81bfc3f42612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_pike, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:38:54 compute-0 systemd[1]: Started libpod-conmon-5f1f0a7804864bd2299af34b5809338cb08cab58ac75107b99ef81bfc3f42612.scope.
Nov 26 11:38:54 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f8221b6a7170eec8146cc94c45466ec6992b34dfa1e66e7329b736f5d220263/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f8221b6a7170eec8146cc94c45466ec6992b34dfa1e66e7329b736f5d220263/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f8221b6a7170eec8146cc94c45466ec6992b34dfa1e66e7329b736f5d220263/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f8221b6a7170eec8146cc94c45466ec6992b34dfa1e66e7329b736f5d220263/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:54 compute-0 podman[78304]: 2025-11-26 11:38:54.585675298 +0000 UTC m=+0.088110651 container init 5f1f0a7804864bd2299af34b5809338cb08cab58ac75107b99ef81bfc3f42612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_pike, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:38:54 compute-0 podman[78304]: 2025-11-26 11:38:54.591398593 +0000 UTC m=+0.093833946 container start 5f1f0a7804864bd2299af34b5809338cb08cab58ac75107b99ef81bfc3f42612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_pike, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 11:38:54 compute-0 podman[78304]: 2025-11-26 11:38:54.592672988 +0000 UTC m=+0.095108341 container attach 5f1f0a7804864bd2299af34b5809338cb08cab58ac75107b99ef81bfc3f42612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 11:38:54 compute-0 podman[78304]: 2025-11-26 11:38:54.515986639 +0000 UTC m=+0.018422011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:38:54 compute-0 sudo[78346]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogulnjhotzfxekkjwwmusgsgahvtbojo ; /usr/bin/python3'
Nov 26 11:38:54 compute-0 sudo[78346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:38:54 compute-0 ceph-mon[74928]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:38:54 compute-0 ceph-mon[74928]: Added label _admin to host compute-0
Nov 26 11:38:54 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/797504925' entity='client.admin' 
Nov 26 11:38:54 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/326484135' entity='client.admin' 
Nov 26 11:38:54 compute-0 python3[78348]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:38:54 compute-0 podman[78349]: 2025-11-26 11:38:54.819584367 +0000 UTC m=+0.028054391 container create 2af62d7ea568b4ce928805eca75d09e68db6dc91ea2491ca3cde253aafab978c (image=quay.io/ceph/ceph:v18, name=stoic_rosalind, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 11:38:54 compute-0 systemd[1]: Started libpod-conmon-2af62d7ea568b4ce928805eca75d09e68db6dc91ea2491ca3cde253aafab978c.scope.
Nov 26 11:38:54 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2435eb82f14615a4b05b8571eb218832289814f04287933b01271f2e60900950/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2435eb82f14615a4b05b8571eb218832289814f04287933b01271f2e60900950/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:54 compute-0 podman[78349]: 2025-11-26 11:38:54.868175366 +0000 UTC m=+0.076645392 container init 2af62d7ea568b4ce928805eca75d09e68db6dc91ea2491ca3cde253aafab978c (image=quay.io/ceph/ceph:v18, name=stoic_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:38:54 compute-0 podman[78349]: 2025-11-26 11:38:54.876680941 +0000 UTC m=+0.085150966 container start 2af62d7ea568b4ce928805eca75d09e68db6dc91ea2491ca3cde253aafab978c (image=quay.io/ceph/ceph:v18, name=stoic_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 11:38:54 compute-0 podman[78349]: 2025-11-26 11:38:54.877938753 +0000 UTC m=+0.086408779 container attach 2af62d7ea568b4ce928805eca75d09e68db6dc91ea2491ca3cde253aafab978c (image=quay.io/ceph/ceph:v18, name=stoic_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:38:54 compute-0 podman[78349]: 2025-11-26 11:38:54.807550406 +0000 UTC m=+0.016020451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Nov 26 11:38:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3018252742' entity='client.admin' 
Nov 26 11:38:55 compute-0 systemd[1]: libpod-2af62d7ea568b4ce928805eca75d09e68db6dc91ea2491ca3cde253aafab978c.scope: Deactivated successfully.
Nov 26 11:38:55 compute-0 podman[78349]: 2025-11-26 11:38:55.309306903 +0000 UTC m=+0.517776928 container died 2af62d7ea568b4ce928805eca75d09e68db6dc91ea2491ca3cde253aafab978c (image=quay.io/ceph/ceph:v18, name=stoic_rosalind, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 11:38:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-2435eb82f14615a4b05b8571eb218832289814f04287933b01271f2e60900950-merged.mount: Deactivated successfully.
Nov 26 11:38:55 compute-0 podman[78349]: 2025-11-26 11:38:55.336464432 +0000 UTC m=+0.544934456 container remove 2af62d7ea568b4ce928805eca75d09e68db6dc91ea2491ca3cde253aafab978c (image=quay.io/ceph/ceph:v18, name=stoic_rosalind, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Nov 26 11:38:55 compute-0 sudo[78346]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:55 compute-0 systemd[1]: libpod-conmon-2af62d7ea568b4ce928805eca75d09e68db6dc91ea2491ca3cde253aafab978c.scope: Deactivated successfully.
Nov 26 11:38:55 compute-0 ceph-mgr[75197]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 11:38:55 compute-0 busy_pike[78318]: [
Nov 26 11:38:55 compute-0 busy_pike[78318]:     {
Nov 26 11:38:55 compute-0 busy_pike[78318]:         "available": false,
Nov 26 11:38:55 compute-0 busy_pike[78318]:         "ceph_device": false,
Nov 26 11:38:55 compute-0 busy_pike[78318]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 26 11:38:55 compute-0 busy_pike[78318]:         "lsm_data": {},
Nov 26 11:38:55 compute-0 busy_pike[78318]:         "lvs": [],
Nov 26 11:38:55 compute-0 busy_pike[78318]:         "path": "/dev/sr0",
Nov 26 11:38:55 compute-0 busy_pike[78318]:         "rejected_reasons": [
Nov 26 11:38:55 compute-0 busy_pike[78318]:             "Insufficient space (<5GB)",
Nov 26 11:38:55 compute-0 busy_pike[78318]:             "Has a FileSystem"
Nov 26 11:38:55 compute-0 busy_pike[78318]:         ],
Nov 26 11:38:55 compute-0 busy_pike[78318]:         "sys_api": {
Nov 26 11:38:55 compute-0 busy_pike[78318]:             "actuators": null,
Nov 26 11:38:55 compute-0 busy_pike[78318]:             "device_nodes": "sr0",
Nov 26 11:38:55 compute-0 busy_pike[78318]:             "devname": "sr0",
Nov 26 11:38:55 compute-0 busy_pike[78318]:             "human_readable_size": "474.00 KB",
Nov 26 11:38:55 compute-0 busy_pike[78318]:             "id_bus": "ata",
Nov 26 11:38:55 compute-0 busy_pike[78318]:             "model": "QEMU DVD-ROM",
Nov 26 11:38:55 compute-0 busy_pike[78318]:             "nr_requests": "64",
Nov 26 11:38:55 compute-0 busy_pike[78318]:             "parent": "/dev/sr0",
Nov 26 11:38:55 compute-0 busy_pike[78318]:             "partitions": {},
Nov 26 11:38:55 compute-0 busy_pike[78318]:             "path": "/dev/sr0",
Nov 26 11:38:55 compute-0 busy_pike[78318]:             "removable": "1",
Nov 26 11:38:55 compute-0 busy_pike[78318]:             "rev": "2.5+",
Nov 26 11:38:55 compute-0 busy_pike[78318]:             "ro": "0",
Nov 26 11:38:55 compute-0 busy_pike[78318]:             "rotational": "1",
Nov 26 11:38:55 compute-0 busy_pike[78318]:             "sas_address": "",
Nov 26 11:38:55 compute-0 busy_pike[78318]:             "sas_device_handle": "",
Nov 26 11:38:55 compute-0 busy_pike[78318]:             "scheduler_mode": "mq-deadline",
Nov 26 11:38:55 compute-0 busy_pike[78318]:             "sectors": 0,
Nov 26 11:38:55 compute-0 busy_pike[78318]:             "sectorsize": "2048",
Nov 26 11:38:55 compute-0 busy_pike[78318]:             "size": 485376.0,
Nov 26 11:38:55 compute-0 busy_pike[78318]:             "support_discard": "2048",
Nov 26 11:38:55 compute-0 busy_pike[78318]:             "type": "disk",
Nov 26 11:38:55 compute-0 busy_pike[78318]:             "vendor": "QEMU"
Nov 26 11:38:55 compute-0 busy_pike[78318]:         }
Nov 26 11:38:55 compute-0 busy_pike[78318]:     }
Nov 26 11:38:55 compute-0 busy_pike[78318]: ]
Nov 26 11:38:55 compute-0 podman[78304]: 2025-11-26 11:38:55.622392687 +0000 UTC m=+1.124828039 container died 5f1f0a7804864bd2299af34b5809338cb08cab58ac75107b99ef81bfc3f42612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_pike, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 11:38:55 compute-0 systemd[1]: libpod-5f1f0a7804864bd2299af34b5809338cb08cab58ac75107b99ef81bfc3f42612.scope: Deactivated successfully.
Nov 26 11:38:55 compute-0 systemd[1]: libpod-5f1f0a7804864bd2299af34b5809338cb08cab58ac75107b99ef81bfc3f42612.scope: Consumed 1.010s CPU time.
Nov 26 11:38:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f8221b6a7170eec8146cc94c45466ec6992b34dfa1e66e7329b736f5d220263-merged.mount: Deactivated successfully.
Nov 26 11:38:55 compute-0 podman[78304]: 2025-11-26 11:38:55.653882199 +0000 UTC m=+1.156317550 container remove 5f1f0a7804864bd2299af34b5809338cb08cab58ac75107b99ef81bfc3f42612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_pike, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 11:38:55 compute-0 systemd[1]: libpod-conmon-5f1f0a7804864bd2299af34b5809338cb08cab58ac75107b99ef81bfc3f42612.scope: Deactivated successfully.
Nov 26 11:38:55 compute-0 sudo[78054]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:38:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:38:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:38:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:38:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 26 11:38:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 11:38:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:38:55 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:38:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:38:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:38:55 compute-0 ceph-mgr[75197]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 26 11:38:55 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 26 11:38:55 compute-0 sudo[80024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:55 compute-0 sudo[80024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:55 compute-0 sudo[80024]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:55 compute-0 sudo[80049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 26 11:38:55 compute-0 sudo[80049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:55 compute-0 sudo[80049]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:55 compute-0 sudo[80074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:55 compute-0 sudo[80074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:55 compute-0 sudo[80074]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:55 compute-0 sudo[80122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/etc/ceph
Nov 26 11:38:55 compute-0 sudo[80122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:55 compute-0 sudo[80122]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:55 compute-0 sudo[80171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:55 compute-0 sudo[80171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:55 compute-0 sudo[80171]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:55 compute-0 sudo[80221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faabkgjwjarutsytwlkgufdbcyssrtrx ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764157135.589509-36859-135418642701145/async_wrapper.py j71392259781 30 /home/zuul/.ansible/tmp/ansible-tmp-1764157135.589509-36859-135418642701145/AnsiballZ_command.py _'
Nov 26 11:38:55 compute-0 sudo[80221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:38:55 compute-0 sudo[80222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/etc/ceph/ceph.conf.new
Nov 26 11:38:55 compute-0 sudo[80222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:55 compute-0 sudo[80222]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:55 compute-0 sudo[80249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:55 compute-0 sudo[80249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:55 compute-0 sudo[80249]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:56 compute-0 sudo[80274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7
Nov 26 11:38:56 compute-0 sudo[80274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:56 compute-0 sudo[80274]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:56 compute-0 ansible-async_wrapper.py[80236]: Invoked with j71392259781 30 /home/zuul/.ansible/tmp/ansible-tmp-1764157135.589509-36859-135418642701145/AnsiballZ_command.py _
Nov 26 11:38:56 compute-0 ansible-async_wrapper.py[80315]: Starting module and watcher
Nov 26 11:38:56 compute-0 ansible-async_wrapper.py[80315]: Start watching 80319 (30)
Nov 26 11:38:56 compute-0 ansible-async_wrapper.py[80319]: Start module (80319)
Nov 26 11:38:56 compute-0 ansible-async_wrapper.py[80236]: Return async_wrapper task started.
Nov 26 11:38:56 compute-0 sudo[80221]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:56 compute-0 sudo[80299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:56 compute-0 sudo[80299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:56 compute-0 sudo[80299]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:56 compute-0 sudo[80329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/etc/ceph/ceph.conf.new
Nov 26 11:38:56 compute-0 sudo[80329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:56 compute-0 sudo[80329]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:56 compute-0 python3[80325]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:38:56 compute-0 sudo[80377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:56 compute-0 sudo[80377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:56 compute-0 sudo[80377]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:56 compute-0 podman[80390]: 2025-11-26 11:38:56.201419323 +0000 UTC m=+0.030274028 container create 743b555694b7be97b972cf97dc02751aa13385f29af86c6220189738b6d56b2e (image=quay.io/ceph/ceph:v18, name=dazzling_mcnulty, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Nov 26 11:38:56 compute-0 systemd[1]: Started libpod-conmon-743b555694b7be97b972cf97dc02751aa13385f29af86c6220189738b6d56b2e.scope.
Nov 26 11:38:56 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:56 compute-0 sudo[80411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/etc/ceph/ceph.conf.new
Nov 26 11:38:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c8fc64c2421b56ec7520403ae9f10b9eb51d1605449b7d8dd490dda0820f798/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c8fc64c2421b56ec7520403ae9f10b9eb51d1605449b7d8dd490dda0820f798/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:56 compute-0 sudo[80411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:56 compute-0 sudo[80411]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:56 compute-0 podman[80390]: 2025-11-26 11:38:56.250134618 +0000 UTC m=+0.078989333 container init 743b555694b7be97b972cf97dc02751aa13385f29af86c6220189738b6d56b2e (image=quay.io/ceph/ceph:v18, name=dazzling_mcnulty, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:38:56 compute-0 podman[80390]: 2025-11-26 11:38:56.256359451 +0000 UTC m=+0.085214155 container start 743b555694b7be97b972cf97dc02751aa13385f29af86c6220189738b6d56b2e (image=quay.io/ceph/ceph:v18, name=dazzling_mcnulty, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 11:38:56 compute-0 podman[80390]: 2025-11-26 11:38:56.257718625 +0000 UTC m=+0.086573331 container attach 743b555694b7be97b972cf97dc02751aa13385f29af86c6220189738b6d56b2e (image=quay.io/ceph/ceph:v18, name=dazzling_mcnulty, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:38:56 compute-0 sudo[80443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:56 compute-0 sudo[80443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:56 compute-0 podman[80390]: 2025-11-26 11:38:56.190997745 +0000 UTC m=+0.019852471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:56 compute-0 sudo[80443]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:56 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3018252742' entity='client.admin' 
Nov 26 11:38:56 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:56 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:56 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:56 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:56 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 11:38:56 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:38:56 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:38:56 compute-0 ceph-mon[74928]: Updating compute-0:/etc/ceph/ceph.conf
Nov 26 11:38:56 compute-0 sudo[80468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/etc/ceph/ceph.conf.new
Nov 26 11:38:56 compute-0 sudo[80468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:56 compute-0 sudo[80468]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:56 compute-0 sudo[80493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:56 compute-0 sudo[80493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:56 compute-0 sudo[80493]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:56 compute-0 sudo[80518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 26 11:38:56 compute-0 sudo[80518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:56 compute-0 sudo[80518]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:56 compute-0 ceph-mgr[75197]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/config/ceph.conf
Nov 26 11:38:56 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/config/ceph.conf
Nov 26 11:38:56 compute-0 sudo[80543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:56 compute-0 sudo[80543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:56 compute-0 sudo[80543]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:56 compute-0 sudo[80568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/config
Nov 26 11:38:56 compute-0 sudo[80568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:56 compute-0 sudo[80568]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:38:56 compute-0 sudo[80593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:56 compute-0 sudo[80593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:56 compute-0 sudo[80593]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:56 compute-0 sudo[80637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/config
Nov 26 11:38:56 compute-0 sudo[80637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:56 compute-0 sudo[80637]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:56 compute-0 sudo[80662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:56 compute-0 sudo[80662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:56 compute-0 sudo[80662]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:56 compute-0 sudo[80687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/config/ceph.conf.new
Nov 26 11:38:56 compute-0 sudo[80687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:56 compute-0 sudo[80687]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:56 compute-0 sudo[80712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:56 compute-0 sudo[80712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:56 compute-0 sudo[80712]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:56 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 11:38:56 compute-0 dazzling_mcnulty[80435]: 
Nov 26 11:38:56 compute-0 dazzling_mcnulty[80435]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 26 11:38:56 compute-0 systemd[1]: libpod-743b555694b7be97b972cf97dc02751aa13385f29af86c6220189738b6d56b2e.scope: Deactivated successfully.
Nov 26 11:38:56 compute-0 podman[80390]: 2025-11-26 11:38:56.701937384 +0000 UTC m=+0.530792090 container died 743b555694b7be97b972cf97dc02751aa13385f29af86c6220189738b6d56b2e (image=quay.io/ceph/ceph:v18, name=dazzling_mcnulty, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:38:56 compute-0 sudo[80737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7
Nov 26 11:38:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c8fc64c2421b56ec7520403ae9f10b9eb51d1605449b7d8dd490dda0820f798-merged.mount: Deactivated successfully.
Nov 26 11:38:56 compute-0 sudo[80737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:56 compute-0 sudo[80737]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:56 compute-0 podman[80390]: 2025-11-26 11:38:56.729055959 +0000 UTC m=+0.557910665 container remove 743b555694b7be97b972cf97dc02751aa13385f29af86c6220189738b6d56b2e (image=quay.io/ceph/ceph:v18, name=dazzling_mcnulty, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 11:38:56 compute-0 systemd[1]: libpod-conmon-743b555694b7be97b972cf97dc02751aa13385f29af86c6220189738b6d56b2e.scope: Deactivated successfully.
Nov 26 11:38:56 compute-0 ansible-async_wrapper.py[80319]: Module complete (80319)
Nov 26 11:38:56 compute-0 sudo[80771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:56 compute-0 sudo[80771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:56 compute-0 sudo[80771]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:56 compute-0 sudo[80800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/config/ceph.conf.new
Nov 26 11:38:56 compute-0 sudo[80800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:56 compute-0 sudo[80800]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:56 compute-0 sudo[80848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:56 compute-0 sudo[80848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:56 compute-0 sudo[80848]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:56 compute-0 sudo[80873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/config/ceph.conf.new
Nov 26 11:38:56 compute-0 sudo[80873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:56 compute-0 sudo[80873]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:56 compute-0 sudo[80898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:56 compute-0 sudo[80898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:56 compute-0 sudo[80898]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:56 compute-0 sudo[80923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/config/ceph.conf.new
Nov 26 11:38:56 compute-0 sudo[80923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:56 compute-0 sudo[80923]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 sudo[80948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:57 compute-0 sudo[80948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:57 compute-0 sudo[80948]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 sudo[80973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/config/ceph.conf.new /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/config/ceph.conf
Nov 26 11:38:57 compute-0 sudo[80973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:57 compute-0 sudo[80973]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 ceph-mgr[75197]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 26 11:38:57 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 26 11:38:57 compute-0 sudo[81001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:57 compute-0 sudo[81001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:57 compute-0 sudo[81001]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 sudo[81046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 26 11:38:57 compute-0 sudo[81046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:57 compute-0 sudo[81046]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 sudo[81071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:57 compute-0 sudo[81071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:57 compute-0 sudo[81071]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 sudo[81127]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeuagqmsfjtdzythsldkqrezunpbvoem ; /usr/bin/python3'
Nov 26 11:38:57 compute-0 sudo[81127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:38:57 compute-0 sudo[81110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/etc/ceph
Nov 26 11:38:57 compute-0 sudo[81110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:57 compute-0 sudo[81110]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 sudo[81147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:57 compute-0 sudo[81147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:57 compute-0 sudo[81147]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 ceph-mon[74928]: Updating compute-0:/var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/config/ceph.conf
Nov 26 11:38:57 compute-0 ceph-mon[74928]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 11:38:57 compute-0 sudo[81172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/etc/ceph/ceph.client.admin.keyring.new
Nov 26 11:38:57 compute-0 sudo[81172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:57 compute-0 sudo[81172]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 python3[81144]: ansible-ansible.legacy.async_status Invoked with jid=j71392259781.80236 mode=status _async_dir=/root/.ansible_async
Nov 26 11:38:57 compute-0 sudo[81127]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 sudo[81197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:57 compute-0 sudo[81197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:57 compute-0 sudo[81197]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 ceph-mgr[75197]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 11:38:57 compute-0 sudo[81225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7
Nov 26 11:38:57 compute-0 sudo[81225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:57 compute-0 sudo[81225]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 sudo[81311]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozzdebnunkglvqjdfeuybuqfzqhbzsif ; /usr/bin/python3'
Nov 26 11:38:57 compute-0 sudo[81311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:38:57 compute-0 sudo[81276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:57 compute-0 sudo[81276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:57 compute-0 sudo[81276]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 sudo[81321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/etc/ceph/ceph.client.admin.keyring.new
Nov 26 11:38:57 compute-0 sudo[81321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:57 compute-0 sudo[81321]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 python3[81318]: ansible-ansible.legacy.async_status Invoked with jid=j71392259781.80236 mode=cleanup _async_dir=/root/.ansible_async
Nov 26 11:38:57 compute-0 sudo[81311]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 sudo[81369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:57 compute-0 sudo[81369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:57 compute-0 sudo[81369]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 sudo[81394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/etc/ceph/ceph.client.admin.keyring.new
Nov 26 11:38:57 compute-0 sudo[81394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:57 compute-0 sudo[81394]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 sudo[81419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:57 compute-0 sudo[81419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:57 compute-0 sudo[81419]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 sudo[81444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/etc/ceph/ceph.client.admin.keyring.new
Nov 26 11:38:57 compute-0 sudo[81444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:57 compute-0 sudo[81444]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 sudo[81469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:57 compute-0 sudo[81469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:57 compute-0 sudo[81469]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 sudo[81494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 26 11:38:57 compute-0 sudo[81494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:57 compute-0 sudo[81494]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 ceph-mgr[75197]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/config/ceph.client.admin.keyring
Nov 26 11:38:57 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/config/ceph.client.admin.keyring
Nov 26 11:38:57 compute-0 sudo[81547]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snycupbicmiksbqfstssxfsjlyugflyi ; /usr/bin/python3'
Nov 26 11:38:57 compute-0 sudo[81547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:38:57 compute-0 sudo[81539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:57 compute-0 sudo[81539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:57 compute-0 sudo[81539]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 sudo[81570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/config
Nov 26 11:38:57 compute-0 sudo[81570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:57 compute-0 sudo[81570]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 sudo[81595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:57 compute-0 sudo[81595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:57 compute-0 sudo[81595]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 python3[81565]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 11:38:57 compute-0 sudo[81620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/config
Nov 26 11:38:57 compute-0 sudo[81620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:57 compute-0 sudo[81620]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 sudo[81547]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 sudo[81647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:57 compute-0 sudo[81647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:57 compute-0 sudo[81647]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:57 compute-0 sudo[81672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/config/ceph.client.admin.keyring.new
Nov 26 11:38:57 compute-0 sudo[81672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:57 compute-0 sudo[81672]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:58 compute-0 sudo[81697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:58 compute-0 sudo[81697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:58 compute-0 sudo[81697]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:58 compute-0 sudo[81722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7
Nov 26 11:38:58 compute-0 sudo[81722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:58 compute-0 sudo[81722]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:58 compute-0 sudo[81747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:58 compute-0 sudo[81747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:58 compute-0 sudo[81747]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:58 compute-0 sudo[81772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/config/ceph.client.admin.keyring.new
Nov 26 11:38:58 compute-0 sudo[81772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:58 compute-0 sudo[81772]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:58 compute-0 sudo[81818]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oisookrrtpafwoakefebmbemuwvsmabq ; /usr/bin/python3'
Nov 26 11:38:58 compute-0 sudo[81818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:38:58 compute-0 sudo[81846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:58 compute-0 sudo[81846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:58 compute-0 sudo[81846]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:58 compute-0 sudo[81871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/config/ceph.client.admin.keyring.new
Nov 26 11:38:58 compute-0 sudo[81871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:58 compute-0 python3[81822]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:38:58 compute-0 sudo[81871]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:58 compute-0 ceph-mon[74928]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 26 11:38:58 compute-0 podman[81896]: 2025-11-26 11:38:58.299476159 +0000 UTC m=+0.025941224 container create a29aad626544d08944900416eb7363fd903974b067ce0e132e5bfe16446c930e (image=quay.io/ceph/ceph:v18, name=pedantic_shannon, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 11:38:58 compute-0 sudo[81897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:58 compute-0 sudo[81897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:58 compute-0 sudo[81897]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:58 compute-0 systemd[1]: Started libpod-conmon-a29aad626544d08944900416eb7363fd903974b067ce0e132e5bfe16446c930e.scope.
Nov 26 11:38:58 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9715da9d05193221dd22006f8e28622aa8da4c35b058b70b7aa5fd86c4df13f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9715da9d05193221dd22006f8e28622aa8da4c35b058b70b7aa5fd86c4df13f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9715da9d05193221dd22006f8e28622aa8da4c35b058b70b7aa5fd86c4df13f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:58 compute-0 podman[81896]: 2025-11-26 11:38:58.342505868 +0000 UTC m=+0.068970964 container init a29aad626544d08944900416eb7363fd903974b067ce0e132e5bfe16446c930e (image=quay.io/ceph/ceph:v18, name=pedantic_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 11:38:58 compute-0 podman[81896]: 2025-11-26 11:38:58.348686768 +0000 UTC m=+0.075151844 container start a29aad626544d08944900416eb7363fd903974b067ce0e132e5bfe16446c930e (image=quay.io/ceph/ceph:v18, name=pedantic_shannon, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:38:58 compute-0 podman[81896]: 2025-11-26 11:38:58.351995109 +0000 UTC m=+0.078460184 container attach a29aad626544d08944900416eb7363fd903974b067ce0e132e5bfe16446c930e (image=quay.io/ceph/ceph:v18, name=pedantic_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:38:58 compute-0 sudo[81934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/config/ceph.client.admin.keyring.new
Nov 26 11:38:58 compute-0 sudo[81934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:58 compute-0 sudo[81934]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:58 compute-0 podman[81896]: 2025-11-26 11:38:58.289790208 +0000 UTC m=+0.016255304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:58 compute-0 sudo[81963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:58 compute-0 sudo[81963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:58 compute-0 sudo[81963]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:58 compute-0 sudo[81988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ebab460c-3fd7-5f66-aa87-e10c143123f7/var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/config/ceph.client.admin.keyring.new /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/config/ceph.client.admin.keyring
Nov 26 11:38:58 compute-0 sudo[81988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:58 compute-0 sudo[81988]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:38:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:38:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:38:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:58 compute-0 ceph-mgr[75197]: [progress INFO root] update: starting ev 03583e69-dda8-4cde-b8c6-504eb29e063e (Updating crash deployment (+1 -> 1))
Nov 26 11:38:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 26 11:38:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 26 11:38:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 26 11:38:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:38:58 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:38:58 compute-0 ceph-mgr[75197]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 26 11:38:58 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 26 11:38:58 compute-0 sudo[82013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:58 compute-0 sudo[82013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:58 compute-0 sudo[82013]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:58 compute-0 sudo[82038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:38:58 compute-0 sudo[82038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:58 compute-0 sudo[82038]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:58 compute-0 sudo[82063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:58 compute-0 sudo[82063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:58 compute-0 sudo[82063]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:58 compute-0 sudo[82088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7
Nov 26 11:38:58 compute-0 sudo[82088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:58 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 11:38:58 compute-0 pedantic_shannon[81935]: 
Nov 26 11:38:58 compute-0 pedantic_shannon[81935]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 26 11:38:58 compute-0 systemd[1]: libpod-a29aad626544d08944900416eb7363fd903974b067ce0e132e5bfe16446c930e.scope: Deactivated successfully.
Nov 26 11:38:58 compute-0 podman[81896]: 2025-11-26 11:38:58.794765907 +0000 UTC m=+0.521230993 container died a29aad626544d08944900416eb7363fd903974b067ce0e132e5bfe16446c930e (image=quay.io/ceph/ceph:v18, name=pedantic_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 11:38:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9715da9d05193221dd22006f8e28622aa8da4c35b058b70b7aa5fd86c4df13f-merged.mount: Deactivated successfully.
Nov 26 11:38:58 compute-0 podman[81896]: 2025-11-26 11:38:58.820004266 +0000 UTC m=+0.546469341 container remove a29aad626544d08944900416eb7363fd903974b067ce0e132e5bfe16446c930e (image=quay.io/ceph/ceph:v18, name=pedantic_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:38:58 compute-0 systemd[1]: libpod-conmon-a29aad626544d08944900416eb7363fd903974b067ce0e132e5bfe16446c930e.scope: Deactivated successfully.
Nov 26 11:38:58 compute-0 sudo[81818]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:58 compute-0 podman[82178]: 2025-11-26 11:38:58.861599239 +0000 UTC m=+0.029206986 container create a2d2d97df5c91a963759057036bc3cab98cdd8fab736db9b884b3d30786784f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:38:58 compute-0 systemd[1]: Started libpod-conmon-a2d2d97df5c91a963759057036bc3cab98cdd8fab736db9b884b3d30786784f7.scope.
Nov 26 11:38:58 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:58 compute-0 podman[82178]: 2025-11-26 11:38:58.91779757 +0000 UTC m=+0.085405307 container init a2d2d97df5c91a963759057036bc3cab98cdd8fab736db9b884b3d30786784f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_jackson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:38:58 compute-0 podman[82178]: 2025-11-26 11:38:58.921498662 +0000 UTC m=+0.089106399 container start a2d2d97df5c91a963759057036bc3cab98cdd8fab736db9b884b3d30786784f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 26 11:38:58 compute-0 podman[82178]: 2025-11-26 11:38:58.922606681 +0000 UTC m=+0.090214418 container attach a2d2d97df5c91a963759057036bc3cab98cdd8fab736db9b884b3d30786784f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_jackson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 11:38:58 compute-0 vigorous_jackson[82191]: 167 167
Nov 26 11:38:58 compute-0 systemd[1]: libpod-a2d2d97df5c91a963759057036bc3cab98cdd8fab736db9b884b3d30786784f7.scope: Deactivated successfully.
Nov 26 11:38:58 compute-0 podman[82178]: 2025-11-26 11:38:58.924760896 +0000 UTC m=+0.092368633 container died a2d2d97df5c91a963759057036bc3cab98cdd8fab736db9b884b3d30786784f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 11:38:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-072281499c5848d17d23f12da546ee93cccf8693f28e5e08681912de872daa1c-merged.mount: Deactivated successfully.
Nov 26 11:38:58 compute-0 podman[82178]: 2025-11-26 11:38:58.94252782 +0000 UTC m=+0.110135557 container remove a2d2d97df5c91a963759057036bc3cab98cdd8fab736db9b884b3d30786784f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:38:58 compute-0 podman[82178]: 2025-11-26 11:38:58.850508027 +0000 UTC m=+0.018115775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:38:58 compute-0 systemd[1]: libpod-conmon-a2d2d97df5c91a963759057036bc3cab98cdd8fab736db9b884b3d30786784f7.scope: Deactivated successfully.
Nov 26 11:38:58 compute-0 systemd[1]: Reloading.
Nov 26 11:38:59 compute-0 systemd-sysv-generator[82232]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:38:59 compute-0 systemd-rc-local-generator[82228]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:38:59 compute-0 sudo[82267]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmfpvmroguwhaeybmaapfwevdblkowtc ; /usr/bin/python3'
Nov 26 11:38:59 compute-0 sudo[82267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:38:59 compute-0 systemd[1]: Reloading.
Nov 26 11:38:59 compute-0 systemd-rc-local-generator[82297]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:38:59 compute-0 systemd-sysv-generator[82301]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:38:59 compute-0 python3[82271]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:38:59 compute-0 podman[82309]: 2025-11-26 11:38:59.321835313 +0000 UTC m=+0.028084138 container create 8be37a0cf78d2eeba0fad3a8aaf3bcfd23baf4a7f67ec446e438a8f445517d58 (image=quay.io/ceph/ceph:v18, name=nostalgic_ardinghelli, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 11:38:59 compute-0 ceph-mgr[75197]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 11:38:59 compute-0 systemd[1]: Started libpod-conmon-8be37a0cf78d2eeba0fad3a8aaf3bcfd23baf4a7f67ec446e438a8f445517d58.scope.
Nov 26 11:38:59 compute-0 systemd[1]: Starting Ceph crash.compute-0 for ebab460c-3fd7-5f66-aa87-e10c143123f7...
Nov 26 11:38:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0114e7be7643562f0bd4e665e50fb3216cdf89870cf6595d35e5b94eecc7af00/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0114e7be7643562f0bd4e665e50fb3216cdf89870cf6595d35e5b94eecc7af00/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0114e7be7643562f0bd4e665e50fb3216cdf89870cf6595d35e5b94eecc7af00/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:59 compute-0 podman[82309]: 2025-11-26 11:38:59.406337676 +0000 UTC m=+0.112586520 container init 8be37a0cf78d2eeba0fad3a8aaf3bcfd23baf4a7f67ec446e438a8f445517d58 (image=quay.io/ceph/ceph:v18, name=nostalgic_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 11:38:59 compute-0 podman[82309]: 2025-11-26 11:38:59.310316595 +0000 UTC m=+0.016565438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:38:59 compute-0 podman[82309]: 2025-11-26 11:38:59.412380746 +0000 UTC m=+0.118629559 container start 8be37a0cf78d2eeba0fad3a8aaf3bcfd23baf4a7f67ec446e438a8f445517d58 (image=quay.io/ceph/ceph:v18, name=nostalgic_ardinghelli, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:38:59 compute-0 podman[82309]: 2025-11-26 11:38:59.413756261 +0000 UTC m=+0.120005105 container attach 8be37a0cf78d2eeba0fad3a8aaf3bcfd23baf4a7f67ec446e438a8f445517d58 (image=quay.io/ceph/ceph:v18, name=nostalgic_ardinghelli, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:38:59 compute-0 ceph-mon[74928]: Updating compute-0:/var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/config/ceph.client.admin.keyring
Nov 26 11:38:59 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:59 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:59 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:59 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 26 11:38:59 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 26 11:38:59 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:38:59 compute-0 ceph-mon[74928]: Deploying daemon crash.compute-0 on compute-0
Nov 26 11:38:59 compute-0 podman[82366]: 2025-11-26 11:38:59.543399495 +0000 UTC m=+0.028651519 container create 1abf78bcbb62a56f038e4c6376d87121bcc84ffdd8ba265647c3308e7f1dd346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-crash-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 26 11:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2cb606e91c356a7f1925eba74970490e0ff964eaeabe02377d1e4cc3658391/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2cb606e91c356a7f1925eba74970490e0ff964eaeabe02377d1e4cc3658391/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2cb606e91c356a7f1925eba74970490e0ff964eaeabe02377d1e4cc3658391/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2cb606e91c356a7f1925eba74970490e0ff964eaeabe02377d1e4cc3658391/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:38:59 compute-0 podman[82366]: 2025-11-26 11:38:59.595923664 +0000 UTC m=+0.081175688 container init 1abf78bcbb62a56f038e4c6376d87121bcc84ffdd8ba265647c3308e7f1dd346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-crash-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:38:59 compute-0 podman[82366]: 2025-11-26 11:38:59.599941103 +0000 UTC m=+0.085193127 container start 1abf78bcbb62a56f038e4c6376d87121bcc84ffdd8ba265647c3308e7f1dd346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-crash-compute-0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 11:38:59 compute-0 bash[82366]: 1abf78bcbb62a56f038e4c6376d87121bcc84ffdd8ba265647c3308e7f1dd346
Nov 26 11:38:59 compute-0 podman[82366]: 2025-11-26 11:38:59.530518015 +0000 UTC m=+0.015770049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:38:59 compute-0 systemd[1]: Started Ceph crash.compute-0 for ebab460c-3fd7-5f66-aa87-e10c143123f7.
Nov 26 11:38:59 compute-0 sudo[82088]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:38:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:38:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 26 11:38:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:59 compute-0 ceph-mgr[75197]: [progress INFO root] complete: finished ev 03583e69-dda8-4cde-b8c6-504eb29e063e (Updating crash deployment (+1 -> 1))
Nov 26 11:38:59 compute-0 ceph-mgr[75197]: [progress INFO root] Completed event 03583e69-dda8-4cde-b8c6-504eb29e063e (Updating crash deployment (+1 -> 1)) in 1 seconds
Nov 26 11:38:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 26 11:38:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:59 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev a6f24e98-4d28-4bf8-86be-f19dd7ed4e7f does not exist
Nov 26 11:38:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 26 11:38:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:38:59 compute-0 ceph-mgr[75197]: [progress INFO root] update: starting ev 73dcdb27-83de-4da7-b356-7cd291216fbd (Updating mgr deployment (+1 -> 2))
Nov 26 11:38:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.zduqno", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 26 11:38:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.zduqno", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 26 11:38:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.zduqno", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 26 11:38:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 26 11:38:59 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 11:38:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:38:59 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:38:59 compute-0 ceph-mgr[75197]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.zduqno on compute-0
Nov 26 11:38:59 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.zduqno on compute-0
Nov 26 11:38:59 compute-0 sudo[82384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:59 compute-0 sudo[82384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:59 compute-0 sudo[82384]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:59 compute-0 sudo[82427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:38:59 compute-0 sudo[82427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:59 compute-0 sudo[82427]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:59 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-crash-compute-0[82378]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 26 11:38:59 compute-0 sudo[82452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:38:59 compute-0 sudo[82452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:59 compute-0 sudo[82452]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:59 compute-0 sudo[82479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7
Nov 26 11:38:59 compute-0 sudo[82479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:38:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Nov 26 11:38:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3798294139' entity='client.admin' 
Nov 26 11:38:59 compute-0 systemd[1]: libpod-8be37a0cf78d2eeba0fad3a8aaf3bcfd23baf4a7f67ec446e438a8f445517d58.scope: Deactivated successfully.
Nov 26 11:38:59 compute-0 podman[82309]: 2025-11-26 11:38:59.864520697 +0000 UTC m=+0.570769531 container died 8be37a0cf78d2eeba0fad3a8aaf3bcfd23baf4a7f67ec446e438a8f445517d58 (image=quay.io/ceph/ceph:v18, name=nostalgic_ardinghelli, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:38:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-0114e7be7643562f0bd4e665e50fb3216cdf89870cf6595d35e5b94eecc7af00-merged.mount: Deactivated successfully.
Nov 26 11:38:59 compute-0 podman[82309]: 2025-11-26 11:38:59.896005449 +0000 UTC m=+0.602254273 container remove 8be37a0cf78d2eeba0fad3a8aaf3bcfd23baf4a7f67ec446e438a8f445517d58 (image=quay.io/ceph/ceph:v18, name=nostalgic_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 26 11:38:59 compute-0 systemd[1]: libpod-conmon-8be37a0cf78d2eeba0fad3a8aaf3bcfd23baf4a7f67ec446e438a8f445517d58.scope: Deactivated successfully.
Nov 26 11:38:59 compute-0 sudo[82267]: pam_unix(sudo:session): session closed for user root
Nov 26 11:38:59 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-crash-compute-0[82378]: 2025-11-26T11:38:59.934+0000 7fc93902e640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 26 11:38:59 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-crash-compute-0[82378]: 2025-11-26T11:38:59.934+0000 7fc93902e640 -1 AuthRegistry(0x7fc934066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 26 11:38:59 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-crash-compute-0[82378]: 2025-11-26T11:38:59.935+0000 7fc93902e640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 26 11:38:59 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-crash-compute-0[82378]: 2025-11-26T11:38:59.935+0000 7fc93902e640 -1 AuthRegistry(0x7fc93902d000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 26 11:38:59 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-crash-compute-0[82378]: 2025-11-26T11:38:59.940+0000 7fc932d76640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 26 11:38:59 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-crash-compute-0[82378]: 2025-11-26T11:38:59.940+0000 7fc93902e640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 26 11:38:59 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-crash-compute-0[82378]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 26 11:38:59 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-crash-compute-0[82378]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 26 11:39:00 compute-0 sudo[82561]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkcitrmflltwekdmfqzpobdabfcwmnaf ; /usr/bin/python3'
Nov 26 11:39:00 compute-0 sudo[82561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:00 compute-0 podman[82586]: 2025-11-26 11:39:00.097434739 +0000 UTC m=+0.026104474 container create d0546a37705fb26c611dcce289bb1df30a832fedd3d345a1e2289d556f3bad8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 11:39:00 compute-0 python3[82567]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:39:00 compute-0 systemd[1]: Started libpod-conmon-d0546a37705fb26c611dcce289bb1df30a832fedd3d345a1e2289d556f3bad8c.scope.
Nov 26 11:39:00 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:00 compute-0 podman[82586]: 2025-11-26 11:39:00.15192102 +0000 UTC m=+0.080590765 container init d0546a37705fb26c611dcce289bb1df30a832fedd3d345a1e2289d556f3bad8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 26 11:39:00 compute-0 podman[82586]: 2025-11-26 11:39:00.160091382 +0000 UTC m=+0.088761107 container start d0546a37705fb26c611dcce289bb1df30a832fedd3d345a1e2289d556f3bad8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_knuth, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:00 compute-0 podman[82586]: 2025-11-26 11:39:00.161999762 +0000 UTC m=+0.090669486 container attach d0546a37705fb26c611dcce289bb1df30a832fedd3d345a1e2289d556f3bad8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 11:39:00 compute-0 zealous_knuth[82599]: 167 167
Nov 26 11:39:00 compute-0 systemd[1]: libpod-d0546a37705fb26c611dcce289bb1df30a832fedd3d345a1e2289d556f3bad8c.scope: Deactivated successfully.
Nov 26 11:39:00 compute-0 podman[82586]: 2025-11-26 11:39:00.164122776 +0000 UTC m=+0.092792502 container died d0546a37705fb26c611dcce289bb1df30a832fedd3d345a1e2289d556f3bad8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:00 compute-0 podman[82600]: 2025-11-26 11:39:00.173145166 +0000 UTC m=+0.035554410 container create 4ce228d663331bc0d07e4d91d7a1a26406c5fd1fa3d2382a16020682bf817ce6 (image=quay.io/ceph/ceph:v18, name=flamboyant_chaum, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-406f6469acd0b485bb3a6b87159b9486989e834eeb99c2e4874dd859baf049e0-merged.mount: Deactivated successfully.
Nov 26 11:39:00 compute-0 podman[82586]: 2025-11-26 11:39:00.183158565 +0000 UTC m=+0.111828290 container remove d0546a37705fb26c611dcce289bb1df30a832fedd3d345a1e2289d556f3bad8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_knuth, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:00 compute-0 podman[82586]: 2025-11-26 11:39:00.086380576 +0000 UTC m=+0.015050321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:00 compute-0 systemd[1]: libpod-conmon-d0546a37705fb26c611dcce289bb1df30a832fedd3d345a1e2289d556f3bad8c.scope: Deactivated successfully.
Nov 26 11:39:00 compute-0 systemd[1]: Started libpod-conmon-4ce228d663331bc0d07e4d91d7a1a26406c5fd1fa3d2382a16020682bf817ce6.scope.
Nov 26 11:39:00 compute-0 systemd[1]: Reloading.
Nov 26 11:39:00 compute-0 podman[82600]: 2025-11-26 11:39:00.156207656 +0000 UTC m=+0.018616919 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:39:00 compute-0 systemd-rc-local-generator[82653]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:39:00 compute-0 systemd-sysv-generator[82657]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:39:00 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02c8720f196bd976589fc93bd1c53f40f0a42eee563cb121cdf256254dc85724/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02c8720f196bd976589fc93bd1c53f40f0a42eee563cb121cdf256254dc85724/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02c8720f196bd976589fc93bd1c53f40f0a42eee563cb121cdf256254dc85724/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:00 compute-0 podman[82600]: 2025-11-26 11:39:00.429966125 +0000 UTC m=+0.292375379 container init 4ce228d663331bc0d07e4d91d7a1a26406c5fd1fa3d2382a16020682bf817ce6 (image=quay.io/ceph/ceph:v18, name=flamboyant_chaum, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:00 compute-0 podman[82600]: 2025-11-26 11:39:00.436251071 +0000 UTC m=+0.298660315 container start 4ce228d663331bc0d07e4d91d7a1a26406c5fd1fa3d2382a16020682bf817ce6 (image=quay.io/ceph/ceph:v18, name=flamboyant_chaum, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 11:39:00 compute-0 podman[82600]: 2025-11-26 11:39:00.438173246 +0000 UTC m=+0.300582510 container attach 4ce228d663331bc0d07e4d91d7a1a26406c5fd1fa3d2382a16020682bf817ce6 (image=quay.io/ceph/ceph:v18, name=flamboyant_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 11:39:00 compute-0 systemd[1]: Reloading.
Nov 26 11:39:00 compute-0 systemd-rc-local-generator[82694]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:39:00 compute-0 systemd-sysv-generator[82697]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:39:00 compute-0 ceph-mon[74928]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 11:39:00 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:00 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:00 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:00 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:00 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:00 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.zduqno", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 26 11:39:00 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.zduqno", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 26 11:39:00 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 11:39:00 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:00 compute-0 ceph-mon[74928]: Deploying daemon mgr.compute-0.zduqno on compute-0
Nov 26 11:39:00 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3798294139' entity='client.admin' 
Nov 26 11:39:00 compute-0 systemd[1]: Starting Ceph mgr.compute-0.zduqno for ebab460c-3fd7-5f66-aa87-e10c143123f7...
Nov 26 11:39:00 compute-0 podman[82768]: 2025-11-26 11:39:00.822322984 +0000 UTC m=+0.026929911 container create 38e7adac6c732c593f5c278c7f56020ee00f75c219c46b2385844bb0109a1556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-zduqno, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ae69ebad7b9ba04e2182a148dc039ffd1bc64c0af1747a94419475bd59fa74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ae69ebad7b9ba04e2182a148dc039ffd1bc64c0af1747a94419475bd59fa74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ae69ebad7b9ba04e2182a148dc039ffd1bc64c0af1747a94419475bd59fa74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ae69ebad7b9ba04e2182a148dc039ffd1bc64c0af1747a94419475bd59fa74/merged/var/lib/ceph/mgr/ceph-compute-0.zduqno supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Nov 26 11:39:00 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3904795841' entity='client.admin' 
Nov 26 11:39:00 compute-0 podman[82768]: 2025-11-26 11:39:00.864929145 +0000 UTC m=+0.069536071 container init 38e7adac6c732c593f5c278c7f56020ee00f75c219c46b2385844bb0109a1556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-zduqno, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 11:39:00 compute-0 podman[82768]: 2025-11-26 11:39:00.870384395 +0000 UTC m=+0.074991321 container start 38e7adac6c732c593f5c278c7f56020ee00f75c219c46b2385844bb0109a1556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-zduqno, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:00 compute-0 bash[82768]: 38e7adac6c732c593f5c278c7f56020ee00f75c219c46b2385844bb0109a1556
Nov 26 11:39:00 compute-0 podman[82768]: 2025-11-26 11:39:00.810857034 +0000 UTC m=+0.015463981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:00 compute-0 systemd[1]: Started Ceph mgr.compute-0.zduqno for ebab460c-3fd7-5f66-aa87-e10c143123f7.
Nov 26 11:39:00 compute-0 systemd[1]: libpod-4ce228d663331bc0d07e4d91d7a1a26406c5fd1fa3d2382a16020682bf817ce6.scope: Deactivated successfully.
Nov 26 11:39:00 compute-0 podman[82600]: 2025-11-26 11:39:00.880886977 +0000 UTC m=+0.743296221 container died 4ce228d663331bc0d07e4d91d7a1a26406c5fd1fa3d2382a16020682bf817ce6 (image=quay.io/ceph/ceph:v18, name=flamboyant_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-02c8720f196bd976589fc93bd1c53f40f0a42eee563cb121cdf256254dc85724-merged.mount: Deactivated successfully.
Nov 26 11:39:00 compute-0 ceph-mgr[82785]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 11:39:00 compute-0 ceph-mgr[82785]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 26 11:39:00 compute-0 ceph-mgr[82785]: pidfile_write: ignore empty --pid-file
Nov 26 11:39:00 compute-0 sudo[82479]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:00 compute-0 podman[82600]: 2025-11-26 11:39:00.90902792 +0000 UTC m=+0.771437164 container remove 4ce228d663331bc0d07e4d91d7a1a26406c5fd1fa3d2382a16020682bf817ce6 (image=quay.io/ceph/ceph:v18, name=flamboyant_chaum, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 11:39:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:39:00 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:39:00 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 26 11:39:00 compute-0 systemd[1]: libpod-conmon-4ce228d663331bc0d07e4d91d7a1a26406c5fd1fa3d2382a16020682bf817ce6.scope: Deactivated successfully.
Nov 26 11:39:00 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:00 compute-0 ceph-mgr[75197]: [progress INFO root] complete: finished ev 73dcdb27-83de-4da7-b356-7cd291216fbd (Updating mgr deployment (+1 -> 2))
Nov 26 11:39:00 compute-0 ceph-mgr[75197]: [progress INFO root] Completed event 73dcdb27-83de-4da7-b356-7cd291216fbd (Updating mgr deployment (+1 -> 2)) in 1 seconds
Nov 26 11:39:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 26 11:39:00 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:00 compute-0 sudo[82561]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:00 compute-0 sudo[82819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:00 compute-0 sudo[82819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:00 compute-0 sudo[82819]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:01 compute-0 ceph-mgr[82785]: mgr[py] Loading python module 'alerts'
Nov 26 11:39:01 compute-0 sudo[82844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:39:01 compute-0 sudo[82844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:01 compute-0 sudo[82844]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:01 compute-0 ansible-async_wrapper.py[80315]: Done in kid B.
Nov 26 11:39:01 compute-0 sudo[82869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:01 compute-0 sudo[82869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:01 compute-0 sudo[82869]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:01 compute-0 sudo[82916]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seusgxoinjzneknkupkhrhcyfzmqmfzq ; /usr/bin/python3'
Nov 26 11:39:01 compute-0 sudo[82916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:01 compute-0 sudo[82920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:01 compute-0 sudo[82920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:01 compute-0 sudo[82920]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:01 compute-0 sudo[82945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:01 compute-0 sudo[82945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:01 compute-0 sudo[82945]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:01 compute-0 python3[82919]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:39:01 compute-0 sudo[82970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 26 11:39:01 compute-0 sudo[82970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:01 compute-0 podman[82982]: 2025-11-26 11:39:01.21995822 +0000 UTC m=+0.030645089 container create 55228649f20a30d8fd8481820bd4ea7b3e17daface8c49aef11ee0a6f2b581fd (image=quay.io/ceph/ceph:v18, name=stupefied_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:01 compute-0 systemd[1]: Started libpod-conmon-55228649f20a30d8fd8481820bd4ea7b3e17daface8c49aef11ee0a6f2b581fd.scope.
Nov 26 11:39:01 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c14fd03538f2cccd693b2ed768998f478093c9d2f08422446940465993e9598a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c14fd03538f2cccd693b2ed768998f478093c9d2f08422446940465993e9598a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c14fd03538f2cccd693b2ed768998f478093c9d2f08422446940465993e9598a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:01 compute-0 ceph-mgr[82785]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 26 11:39:01 compute-0 ceph-mgr[82785]: mgr[py] Loading python module 'balancer'
Nov 26 11:39:01 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-zduqno[82779]: 2025-11-26T11:39:01.275+0000 7f37daada140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 26 11:39:01 compute-0 podman[82982]: 2025-11-26 11:39:01.27881204 +0000 UTC m=+0.089498919 container init 55228649f20a30d8fd8481820bd4ea7b3e17daface8c49aef11ee0a6f2b581fd (image=quay.io/ceph/ceph:v18, name=stupefied_maxwell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 11:39:01 compute-0 podman[82982]: 2025-11-26 11:39:01.286677146 +0000 UTC m=+0.097364015 container start 55228649f20a30d8fd8481820bd4ea7b3e17daface8c49aef11ee0a6f2b581fd (image=quay.io/ceph/ceph:v18, name=stupefied_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Nov 26 11:39:01 compute-0 podman[82982]: 2025-11-26 11:39:01.287975575 +0000 UTC m=+0.098662454 container attach 55228649f20a30d8fd8481820bd4ea7b3e17daface8c49aef11ee0a6f2b581fd (image=quay.io/ceph/ceph:v18, name=stupefied_maxwell, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 11:39:01 compute-0 podman[82982]: 2025-11-26 11:39:01.20713486 +0000 UTC m=+0.017821749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:39:01 compute-0 ceph-mgr[75197]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 26 11:39:01 compute-0 ceph-mon[74928]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 26 11:39:01 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:01 compute-0 ceph-mgr[75197]: [progress INFO root] Writing back 2 completed events
Nov 26 11:39:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 26 11:39:01 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:01 compute-0 ceph-mgr[82785]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 26 11:39:01 compute-0 ceph-mgr[82785]: mgr[py] Loading python module 'cephadm'
Nov 26 11:39:01 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-zduqno[82779]: 2025-11-26T11:39:01.497+0000 7f37daada140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 26 11:39:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:39:01 compute-0 podman[83068]: 2025-11-26 11:39:01.543682472 +0000 UTC m=+0.036494904 container exec 810eaed6cbde00330c0646cedaef5c2ae94579236d67ba42b8ef02d055d04ad5 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:01 compute-0 podman[83068]: 2025-11-26 11:39:01.624757429 +0000 UTC m=+0.117569861 container exec_died 810eaed6cbde00330c0646cedaef5c2ae94579236d67ba42b8ef02d055d04ad5 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:39:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Nov 26 11:39:01 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/478030149' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 26 11:39:01 compute-0 sudo[82970]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:39:01 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:39:01 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:39:01 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:39:01 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:39:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:39:01 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:01 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 632a0082-1dcc-408b-9dc5-f36979787daf does not exist
Nov 26 11:39:01 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 14d29a18-2c28-47e1-b164-27e673e4eb2b does not exist
Nov 26 11:39:01 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 41b96eba-b992-411c-a0e6-ce3684a119b1 does not exist
Nov 26 11:39:01 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3904795841' entity='client.admin' 
Nov 26 11:39:01 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:01 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:01 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:01 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:01 compute-0 ceph-mon[74928]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:01 compute-0 ceph-mon[74928]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 26 11:39:01 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:01 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/478030149' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 26 11:39:01 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:01 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:01 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:01 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:39:01 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:01 compute-0 sudo[83160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:01 compute-0 sudo[83160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:01 compute-0 sudo[83160]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:01 compute-0 sudo[83185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:39:01 compute-0 sudo[83185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:01 compute-0 sudo[83185]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Nov 26 11:39:01 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Nov 26 11:39:01 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Nov 26 11:39:01 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Nov 26 11:39:01 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:01 compute-0 ceph-mgr[75197]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 26 11:39:01 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 26 11:39:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 26 11:39:01 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 26 11:39:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 26 11:39:01 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 26 11:39:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:39:01 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:01 compute-0 ceph-mgr[75197]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 26 11:39:01 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 26 11:39:01 compute-0 sudo[83210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:01 compute-0 sudo[83210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:01 compute-0 sudo[83210]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:02 compute-0 sudo[83235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:02 compute-0 sudo[83235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:02 compute-0 sudo[83235]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:02 compute-0 sudo[83260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:02 compute-0 sudo[83260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:02 compute-0 sudo[83260]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:02 compute-0 sudo[83285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7
Nov 26 11:39:02 compute-0 sudo[83285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:02 compute-0 podman[83323]: 2025-11-26 11:39:02.281749758 +0000 UTC m=+0.030595375 container create 17075c4de41d4679f9a97707d4a456f4de60f384737affdef1b76e9dcce138da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_clarke, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:02 compute-0 systemd[1]: Started libpod-conmon-17075c4de41d4679f9a97707d4a456f4de60f384737affdef1b76e9dcce138da.scope.
Nov 26 11:39:02 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:02 compute-0 podman[83323]: 2025-11-26 11:39:02.329234802 +0000 UTC m=+0.078080419 container init 17075c4de41d4679f9a97707d4a456f4de60f384737affdef1b76e9dcce138da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_clarke, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 11:39:02 compute-0 podman[83323]: 2025-11-26 11:39:02.33370293 +0000 UTC m=+0.082548537 container start 17075c4de41d4679f9a97707d4a456f4de60f384737affdef1b76e9dcce138da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 11:39:02 compute-0 podman[83323]: 2025-11-26 11:39:02.334879339 +0000 UTC m=+0.083724966 container attach 17075c4de41d4679f9a97707d4a456f4de60f384737affdef1b76e9dcce138da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_clarke, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:02 compute-0 sleepy_clarke[83336]: 167 167
Nov 26 11:39:02 compute-0 systemd[1]: libpod-17075c4de41d4679f9a97707d4a456f4de60f384737affdef1b76e9dcce138da.scope: Deactivated successfully.
Nov 26 11:39:02 compute-0 podman[83323]: 2025-11-26 11:39:02.337521884 +0000 UTC m=+0.086367491 container died 17075c4de41d4679f9a97707d4a456f4de60f384737affdef1b76e9dcce138da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_clarke, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 11:39:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e39a30422424943c4169076d59a5b7b490f3a0336ad711cabb4b32d2fff6c43-merged.mount: Deactivated successfully.
Nov 26 11:39:02 compute-0 podman[83323]: 2025-11-26 11:39:02.362604809 +0000 UTC m=+0.111450416 container remove 17075c4de41d4679f9a97707d4a456f4de60f384737affdef1b76e9dcce138da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 11:39:02 compute-0 podman[83323]: 2025-11-26 11:39:02.270406801 +0000 UTC m=+0.019252428 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:02 compute-0 systemd[1]: libpod-conmon-17075c4de41d4679f9a97707d4a456f4de60f384737affdef1b76e9dcce138da.scope: Deactivated successfully.
Nov 26 11:39:02 compute-0 sudo[83285]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:39:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:39:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:02 compute-0 ceph-mgr[75197]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.mwrktr (unknown last config time)...
Nov 26 11:39:02 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.mwrktr (unknown last config time)...
Nov 26 11:39:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.mwrktr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 26 11:39:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mwrktr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 26 11:39:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 26 11:39:02 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 11:39:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:39:02 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:02 compute-0 ceph-mgr[75197]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.mwrktr on compute-0
Nov 26 11:39:02 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.mwrktr on compute-0
Nov 26 11:39:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 26 11:39:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 11:39:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/478030149' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 26 11:39:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 26 11:39:02 compute-0 stupefied_maxwell[83007]: set require_min_compat_client to mimic
Nov 26 11:39:02 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 26 11:39:02 compute-0 systemd[1]: libpod-55228649f20a30d8fd8481820bd4ea7b3e17daface8c49aef11ee0a6f2b581fd.scope: Deactivated successfully.
Nov 26 11:39:02 compute-0 podman[82982]: 2025-11-26 11:39:02.428538645 +0000 UTC m=+1.239225534 container died 55228649f20a30d8fd8481820bd4ea7b3e17daface8c49aef11ee0a6f2b581fd (image=quay.io/ceph/ceph:v18, name=stupefied_maxwell, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 11:39:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c14fd03538f2cccd693b2ed768998f478093c9d2f08422446940465993e9598a-merged.mount: Deactivated successfully.
Nov 26 11:39:02 compute-0 sudo[83352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:02 compute-0 sudo[83352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:02 compute-0 podman[82982]: 2025-11-26 11:39:02.456605539 +0000 UTC m=+1.267292408 container remove 55228649f20a30d8fd8481820bd4ea7b3e17daface8c49aef11ee0a6f2b581fd (image=quay.io/ceph/ceph:v18, name=stupefied_maxwell, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 11:39:02 compute-0 sudo[83352]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:02 compute-0 systemd[1]: libpod-conmon-55228649f20a30d8fd8481820bd4ea7b3e17daface8c49aef11ee0a6f2b581fd.scope: Deactivated successfully.
Nov 26 11:39:02 compute-0 sudo[82916]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:02 compute-0 sudo[83387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:02 compute-0 sudo[83387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:02 compute-0 sudo[83387]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:02 compute-0 sudo[83412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:02 compute-0 sudo[83412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:02 compute-0 sudo[83412]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:02 compute-0 sudo[83437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7
Nov 26 11:39:02 compute-0 sudo[83437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:02 compute-0 podman[83487]: 2025-11-26 11:39:02.76717644 +0000 UTC m=+0.032166629 container create d156bae5c860ad16865836968c1694885fa71e6a5972e400722f0d665f105ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_heisenberg, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 11:39:02 compute-0 systemd[1]: Started libpod-conmon-d156bae5c860ad16865836968c1694885fa71e6a5972e400722f0d665f105ccb.scope.
Nov 26 11:39:02 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:02 compute-0 podman[83487]: 2025-11-26 11:39:02.808579891 +0000 UTC m=+0.073570080 container init d156bae5c860ad16865836968c1694885fa71e6a5972e400722f0d665f105ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_heisenberg, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:39:02 compute-0 podman[83487]: 2025-11-26 11:39:02.812528821 +0000 UTC m=+0.077519000 container start d156bae5c860ad16865836968c1694885fa71e6a5972e400722f0d665f105ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 11:39:02 compute-0 epic_heisenberg[83508]: 167 167
Nov 26 11:39:02 compute-0 podman[83487]: 2025-11-26 11:39:02.815322471 +0000 UTC m=+0.080312650 container attach d156bae5c860ad16865836968c1694885fa71e6a5972e400722f0d665f105ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_heisenberg, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:02 compute-0 systemd[1]: libpod-d156bae5c860ad16865836968c1694885fa71e6a5972e400722f0d665f105ccb.scope: Deactivated successfully.
Nov 26 11:39:02 compute-0 podman[83487]: 2025-11-26 11:39:02.816423017 +0000 UTC m=+0.081413196 container died d156bae5c860ad16865836968c1694885fa71e6a5972e400722f0d665f105ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_heisenberg, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 26 11:39:02 compute-0 sudo[83526]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muampmetnvfrmixaxdtjnkgzgxpvmiix ; /usr/bin/python3'
Nov 26 11:39:02 compute-0 sudo[83526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-36cc46a35af35e9cf6c429857795ccfa18a47d415e9daca9b94aa12544938769-merged.mount: Deactivated successfully.
Nov 26 11:39:02 compute-0 podman[83487]: 2025-11-26 11:39:02.837027084 +0000 UTC m=+0.102017262 container remove d156bae5c860ad16865836968c1694885fa71e6a5972e400722f0d665f105ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_heisenberg, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 11:39:02 compute-0 podman[83487]: 2025-11-26 11:39:02.750720497 +0000 UTC m=+0.015710686 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:02 compute-0 systemd[1]: libpod-conmon-d156bae5c860ad16865836968c1694885fa71e6a5972e400722f0d665f105ccb.scope: Deactivated successfully.
Nov 26 11:39:02 compute-0 sudo[83437]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:39:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:39:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:02 compute-0 sudo[83542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:02 compute-0 sudo[83542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:02 compute-0 sudo[83542]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:02 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:02 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:02 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:02 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:02 compute-0 ceph-mon[74928]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 26 11:39:02 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 26 11:39:02 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 26 11:39:02 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:02 compute-0 ceph-mon[74928]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 26 11:39:02 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:02 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:02 compute-0 ceph-mon[74928]: Reconfiguring mgr.compute-0.mwrktr (unknown last config time)...
Nov 26 11:39:02 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mwrktr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 26 11:39:02 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 11:39:02 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:02 compute-0 ceph-mon[74928]: Reconfiguring daemon mgr.compute-0.mwrktr on compute-0
Nov 26 11:39:02 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/478030149' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 26 11:39:02 compute-0 ceph-mon[74928]: osdmap e3: 0 total, 0 up, 0 in
Nov 26 11:39:02 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:02 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:02 compute-0 python3[83532]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:39:02 compute-0 sudo[83567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:02 compute-0 sudo[83567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:02 compute-0 sudo[83567]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:02 compute-0 podman[83585]: 2025-11-26 11:39:02.995085139 +0000 UTC m=+0.030834787 container create be6519674dffeba9842d61b97c637d31ac593525ebb26a70196f5e39cfc97ae4 (image=quay.io/ceph/ceph:v18, name=agitated_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:39:03 compute-0 systemd[1]: Started libpod-conmon-be6519674dffeba9842d61b97c637d31ac593525ebb26a70196f5e39cfc97ae4.scope.
Nov 26 11:39:03 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0995e86bdd30e5d79f9fc16f15eb471277af5816abc5c91643f362c3daf8d787/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0995e86bdd30e5d79f9fc16f15eb471277af5816abc5c91643f362c3daf8d787/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0995e86bdd30e5d79f9fc16f15eb471277af5816abc5c91643f362c3daf8d787/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:03 compute-0 sudo[83600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:03 compute-0 sudo[83600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:03 compute-0 sudo[83600]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:03 compute-0 podman[83585]: 2025-11-26 11:39:03.044480498 +0000 UTC m=+0.080230166 container init be6519674dffeba9842d61b97c637d31ac593525ebb26a70196f5e39cfc97ae4 (image=quay.io/ceph/ceph:v18, name=agitated_bell, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 11:39:03 compute-0 podman[83585]: 2025-11-26 11:39:03.048981829 +0000 UTC m=+0.084731486 container start be6519674dffeba9842d61b97c637d31ac593525ebb26a70196f5e39cfc97ae4 (image=quay.io/ceph/ceph:v18, name=agitated_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:39:03 compute-0 podman[83585]: 2025-11-26 11:39:03.050171963 +0000 UTC m=+0.085921622 container attach be6519674dffeba9842d61b97c637d31ac593525ebb26a70196f5e39cfc97ae4 (image=quay.io/ceph/ceph:v18, name=agitated_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:03 compute-0 podman[83585]: 2025-11-26 11:39:02.982175627 +0000 UTC m=+0.017925305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:39:03 compute-0 sudo[83633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 26 11:39:03 compute-0 sudo[83633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:03 compute-0 ceph-mgr[82785]: mgr[py] Loading python module 'crash'
Nov 26 11:39:03 compute-0 ceph-mgr[82785]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 26 11:39:03 compute-0 ceph-mgr[82785]: mgr[py] Loading python module 'dashboard'
Nov 26 11:39:03 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-zduqno[82779]: 2025-11-26T11:39:03.360+0000 7f37daada140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 26 11:39:03 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:03 compute-0 podman[83734]: 2025-11-26 11:39:03.436701579 +0000 UTC m=+0.045012360 container exec 810eaed6cbde00330c0646cedaef5c2ae94579236d67ba42b8ef02d055d04ad5 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:03 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:39:03 compute-0 sudo[83752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:03 compute-0 sudo[83752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:03 compute-0 sudo[83752]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:03 compute-0 podman[83769]: 2025-11-26 11:39:03.566701828 +0000 UTC m=+0.048375154 container exec_died 810eaed6cbde00330c0646cedaef5c2ae94579236d67ba42b8ef02d055d04ad5 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 11:39:03 compute-0 podman[83734]: 2025-11-26 11:39:03.570747138 +0000 UTC m=+0.179057909 container exec_died 810eaed6cbde00330c0646cedaef5c2ae94579236d67ba42b8ef02d055d04ad5 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:03 compute-0 sudo[83785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:03 compute-0 sudo[83785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:03 compute-0 sudo[83785]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:03 compute-0 sudo[83812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:03 compute-0 sudo[83812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:03 compute-0 sudo[83812]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:03 compute-0 sudo[83853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 26 11:39:03 compute-0 sudo[83853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:03 compute-0 sudo[83633]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:39:03 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:39:03 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:39:03 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:39:03 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:39:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:39:03 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:03 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev a36e3f67-2434-4960-9091-f8c578487221 does not exist
Nov 26 11:39:03 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev b58d217a-f99c-4ebe-a5a8-f8718ae10466 does not exist
Nov 26 11:39:03 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 3f9f23c3-7e01-4953-aa5e-d42f1b49ad63 does not exist
Nov 26 11:39:03 compute-0 sudo[83902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:03 compute-0 sudo[83902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:03 compute-0 sudo[83902]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:03 compute-0 sudo[83939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:39:03 compute-0 sudo[83939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:03 compute-0 sudo[83939]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:03 compute-0 sudo[83853]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 26 11:39:03 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 26 11:39:03 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 26 11:39:03 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 26 11:39:03 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:03 compute-0 ceph-mgr[75197]: [cephadm INFO root] Added host compute-0
Nov 26 11:39:03 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 26 11:39:03 compute-0 ceph-mgr[75197]: [cephadm INFO root] Saving service mon spec with placement compute-0
Nov 26 11:39:03 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Nov 26 11:39:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 26 11:39:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:39:03 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:03 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:39:03 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:39:03 compute-0 ceph-mgr[75197]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Nov 26 11:39:03 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Nov 26 11:39:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 26 11:39:03 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:03 compute-0 ceph-mgr[75197]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 26 11:39:03 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 26 11:39:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:39:03 compute-0 ceph-mgr[75197]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Nov 26 11:39:03 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Nov 26 11:39:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Nov 26 11:39:03 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:03 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev ac0680cc-03c4-48a5-ab08-d9b11ba5cf71 does not exist
Nov 26 11:39:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 26 11:39:03 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:03 compute-0 agitated_bell[83624]: Added host 'compute-0' with addr '192.168.122.100'
Nov 26 11:39:03 compute-0 agitated_bell[83624]: Scheduled mon update...
Nov 26 11:39:03 compute-0 agitated_bell[83624]: Scheduled mgr update...
Nov 26 11:39:03 compute-0 agitated_bell[83624]: Scheduled osd.default_drive_group update...
Nov 26 11:39:03 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:03 compute-0 ceph-mgr[75197]: [progress INFO root] update: starting ev bceec8ba-d0c1-4427-9b60-ef30c39d4602 (Updating mgr deployment (-1 -> 1))
Nov 26 11:39:03 compute-0 ceph-mgr[75197]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.zduqno from compute-0 -- ports [8765]
Nov 26 11:39:03 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.zduqno from compute-0 -- ports [8765]
Nov 26 11:39:03 compute-0 systemd[1]: libpod-be6519674dffeba9842d61b97c637d31ac593525ebb26a70196f5e39cfc97ae4.scope: Deactivated successfully.
Nov 26 11:39:03 compute-0 podman[83585]: 2025-11-26 11:39:03.930798178 +0000 UTC m=+0.966547846 container died be6519674dffeba9842d61b97c637d31ac593525ebb26a70196f5e39cfc97ae4 (image=quay.io/ceph/ceph:v18, name=agitated_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 11:39:03 compute-0 ceph-mon[74928]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:03 compute-0 ceph-mon[74928]: from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:39:03 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:03 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:03 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:03 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:39:03 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:03 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:03 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:03 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:03 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:03 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:03 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:03 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:39:03 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:03 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:03 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:03 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-0995e86bdd30e5d79f9fc16f15eb471277af5816abc5c91643f362c3daf8d787-merged.mount: Deactivated successfully.
Nov 26 11:39:03 compute-0 podman[83585]: 2025-11-26 11:39:03.959136013 +0000 UTC m=+0.994885671 container remove be6519674dffeba9842d61b97c637d31ac593525ebb26a70196f5e39cfc97ae4 (image=quay.io/ceph/ceph:v18, name=agitated_bell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:03 compute-0 systemd[1]: libpod-conmon-be6519674dffeba9842d61b97c637d31ac593525ebb26a70196f5e39cfc97ae4.scope: Deactivated successfully.
Nov 26 11:39:03 compute-0 sudo[83526]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:03 compute-0 sudo[83971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:03 compute-0 sudo[83971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:03 compute-0 sudo[83971]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:04 compute-0 sudo[84006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:04 compute-0 sudo[84006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:04 compute-0 sudo[84006]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:04 compute-0 sudo[84031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:04 compute-0 sudo[84031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:04 compute-0 sudo[84031]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:04 compute-0 sudo[84056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 rm-daemon --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --name mgr.compute-0.zduqno --force --tcp-ports 8765
Nov 26 11:39:04 compute-0 sudo[84056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:04 compute-0 sudo[84104]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbthxxaanezixihgsoqzsmwvmcaeizbv ; /usr/bin/python3'
Nov 26 11:39:04 compute-0 sudo[84104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:04 compute-0 python3[84106]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:39:04 compute-0 podman[84131]: 2025-11-26 11:39:04.310788839 +0000 UTC m=+0.042407256 container create d8b7f73f4bb14281b326e3c324a4623840dce51d52f372d73e7cad3c59dc937a (image=quay.io/ceph/ceph:v18, name=mystifying_cori, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:04 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.zduqno for ebab460c-3fd7-5f66-aa87-e10c143123f7...
Nov 26 11:39:04 compute-0 systemd[1]: Started libpod-conmon-d8b7f73f4bb14281b326e3c324a4623840dce51d52f372d73e7cad3c59dc937a.scope.
Nov 26 11:39:04 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6af9da852ed0cc53786c1eed7032285db7da9c527f299c2ac1374fa3d7604a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6af9da852ed0cc53786c1eed7032285db7da9c527f299c2ac1374fa3d7604a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6af9da852ed0cc53786c1eed7032285db7da9c527f299c2ac1374fa3d7604a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:04 compute-0 podman[84131]: 2025-11-26 11:39:04.373118255 +0000 UTC m=+0.104736682 container init d8b7f73f4bb14281b326e3c324a4623840dce51d52f372d73e7cad3c59dc937a (image=quay.io/ceph/ceph:v18, name=mystifying_cori, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 11:39:04 compute-0 podman[84131]: 2025-11-26 11:39:04.383792331 +0000 UTC m=+0.115410738 container start d8b7f73f4bb14281b326e3c324a4623840dce51d52f372d73e7cad3c59dc937a (image=quay.io/ceph/ceph:v18, name=mystifying_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:39:04 compute-0 podman[84131]: 2025-11-26 11:39:04.290234928 +0000 UTC m=+0.021853334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:39:04 compute-0 podman[84131]: 2025-11-26 11:39:04.386716356 +0000 UTC m=+0.118334784 container attach d8b7f73f4bb14281b326e3c324a4623840dce51d52f372d73e7cad3c59dc937a (image=quay.io/ceph/ceph:v18, name=mystifying_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:04 compute-0 podman[84178]: 2025-11-26 11:39:04.489437667 +0000 UTC m=+0.044067589 container died 38e7adac6c732c593f5c278c7f56020ee00f75c219c46b2385844bb0109a1556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-zduqno, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8ae69ebad7b9ba04e2182a148dc039ffd1bc64c0af1747a94419475bd59fa74-merged.mount: Deactivated successfully.
Nov 26 11:39:04 compute-0 podman[84178]: 2025-11-26 11:39:04.513340606 +0000 UTC m=+0.067970519 container remove 38e7adac6c732c593f5c278c7f56020ee00f75c219c46b2385844bb0109a1556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-zduqno, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 11:39:04 compute-0 bash[84178]: ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-zduqno
Nov 26 11:39:04 compute-0 systemd[1]: ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7@mgr.compute-0.zduqno.service: Main process exited, code=exited, status=143/n/a
Nov 26 11:39:04 compute-0 systemd[1]: ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7@mgr.compute-0.zduqno.service: Failed with result 'exit-code'.
Nov 26 11:39:04 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.zduqno for ebab460c-3fd7-5f66-aa87-e10c143123f7.
Nov 26 11:39:04 compute-0 systemd[1]: ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7@mgr.compute-0.zduqno.service: Consumed 4.028s CPU time.
Nov 26 11:39:04 compute-0 systemd[1]: Reloading.
Nov 26 11:39:04 compute-0 systemd-sysv-generator[84273]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:39:04 compute-0 systemd-rc-local-generator[84264]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:39:04 compute-0 sudo[84056]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:04 compute-0 ceph-mgr[75197]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.zduqno
Nov 26 11:39:04 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.zduqno
Nov 26 11:39:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.zduqno"} v 0) v1
Nov 26 11:39:04 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.zduqno"}]: dispatch
Nov 26 11:39:04 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.zduqno"}]': finished
Nov 26 11:39:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 26 11:39:04 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 26 11:39:04 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2819785406' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 26 11:39:04 compute-0 ceph-mgr[75197]: [progress INFO root] complete: finished ev bceec8ba-d0c1-4427-9b60-ef30c39d4602 (Updating mgr deployment (-1 -> 1))
Nov 26 11:39:04 compute-0 ceph-mgr[75197]: [progress INFO root] Completed event bceec8ba-d0c1-4427-9b60-ef30c39d4602 (Updating mgr deployment (-1 -> 1)) in 1 seconds
Nov 26 11:39:04 compute-0 mystifying_cori[84161]: 
Nov 26 11:39:04 compute-0 mystifying_cori[84161]: {"fsid":"ebab460c-3fd7-5f66-aa87-e10c143123f7","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":63,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-26T11:37:59.453002+0000","services":{}},"progress_events":{}}
Nov 26 11:39:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 26 11:39:04 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:04 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev c02a735c-b3c5-4aec-8ec5-e3f3d3d9a959 does not exist
Nov 26 11:39:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 11:39:04 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:39:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 11:39:04 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:39:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:39:04 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:04 compute-0 podman[84131]: 2025-11-26 11:39:04.884266629 +0000 UTC m=+0.615885036 container died d8b7f73f4bb14281b326e3c324a4623840dce51d52f372d73e7cad3c59dc937a (image=quay.io/ceph/ceph:v18, name=mystifying_cori, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 11:39:04 compute-0 systemd[1]: libpod-d8b7f73f4bb14281b326e3c324a4623840dce51d52f372d73e7cad3c59dc937a.scope: Deactivated successfully.
Nov 26 11:39:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa6af9da852ed0cc53786c1eed7032285db7da9c527f299c2ac1374fa3d7604a-merged.mount: Deactivated successfully.
Nov 26 11:39:04 compute-0 podman[84131]: 2025-11-26 11:39:04.909062713 +0000 UTC m=+0.640681121 container remove d8b7f73f4bb14281b326e3c324a4623840dce51d52f372d73e7cad3c59dc937a (image=quay.io/ceph/ceph:v18, name=mystifying_cori, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:04 compute-0 systemd[1]: libpod-conmon-d8b7f73f4bb14281b326e3c324a4623840dce51d52f372d73e7cad3c59dc937a.scope: Deactivated successfully.
Nov 26 11:39:04 compute-0 sudo[84281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:04 compute-0 sudo[84104]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:04 compute-0 sudo[84281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:04 compute-0 sudo[84281]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:04 compute-0 ceph-mon[74928]: Added host compute-0
Nov 26 11:39:04 compute-0 ceph-mon[74928]: Saving service mon spec with placement compute-0
Nov 26 11:39:04 compute-0 ceph-mon[74928]: Saving service mgr spec with placement compute-0
Nov 26 11:39:04 compute-0 ceph-mon[74928]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 26 11:39:04 compute-0 ceph-mon[74928]: Saving service osd.default_drive_group spec with placement compute-0
Nov 26 11:39:04 compute-0 ceph-mon[74928]: Removing daemon mgr.compute-0.zduqno from compute-0 -- ports [8765]
Nov 26 11:39:04 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.zduqno"}]: dispatch
Nov 26 11:39:04 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.zduqno"}]': finished
Nov 26 11:39:04 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:04 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2819785406' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 26 11:39:04 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:04 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:39:04 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:39:04 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:04 compute-0 sudo[84314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:04 compute-0 sudo[84314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:04 compute-0 sudo[84314]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:05 compute-0 sudo[84339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:05 compute-0 sudo[84339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:05 compute-0 sudo[84339]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:05 compute-0 sudo[84364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 11:39:05 compute-0 sudo[84364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:05 compute-0 podman[84419]: 2025-11-26 11:39:05.260675824 +0000 UTC m=+0.025406527 container create 07265261c338ebfe1a8b714706d6c55c1edab1722ae9e7d5cfdbb24445c16dce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_leakey, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:39:05 compute-0 systemd[1]: Started libpod-conmon-07265261c338ebfe1a8b714706d6c55c1edab1722ae9e7d5cfdbb24445c16dce.scope.
Nov 26 11:39:05 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:05 compute-0 podman[84419]: 2025-11-26 11:39:05.315159481 +0000 UTC m=+0.079890202 container init 07265261c338ebfe1a8b714706d6c55c1edab1722ae9e7d5cfdbb24445c16dce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 11:39:05 compute-0 podman[84419]: 2025-11-26 11:39:05.319478047 +0000 UTC m=+0.084208759 container start 07265261c338ebfe1a8b714706d6c55c1edab1722ae9e7d5cfdbb24445c16dce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_leakey, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:05 compute-0 podman[84419]: 2025-11-26 11:39:05.320560078 +0000 UTC m=+0.085290780 container attach 07265261c338ebfe1a8b714706d6c55c1edab1722ae9e7d5cfdbb24445c16dce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 11:39:05 compute-0 heuristic_leakey[84432]: 167 167
Nov 26 11:39:05 compute-0 systemd[1]: libpod-07265261c338ebfe1a8b714706d6c55c1edab1722ae9e7d5cfdbb24445c16dce.scope: Deactivated successfully.
Nov 26 11:39:05 compute-0 podman[84419]: 2025-11-26 11:39:05.3231743 +0000 UTC m=+0.087905012 container died 07265261c338ebfe1a8b714706d6c55c1edab1722ae9e7d5cfdbb24445c16dce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 11:39:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-fafc6172488a4a3a8c8d4e0f7f2389a4585eb601e99eb36a085bce5a2ade10b0-merged.mount: Deactivated successfully.
Nov 26 11:39:05 compute-0 podman[84419]: 2025-11-26 11:39:05.345049164 +0000 UTC m=+0.109779866 container remove 07265261c338ebfe1a8b714706d6c55c1edab1722ae9e7d5cfdbb24445c16dce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_leakey, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 11:39:05 compute-0 podman[84419]: 2025-11-26 11:39:05.2500975 +0000 UTC m=+0.014828222 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:05 compute-0 systemd[1]: libpod-conmon-07265261c338ebfe1a8b714706d6c55c1edab1722ae9e7d5cfdbb24445c16dce.scope: Deactivated successfully.
Nov 26 11:39:05 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:05 compute-0 podman[84454]: 2025-11-26 11:39:05.450620249 +0000 UTC m=+0.025502388 container create 90eb334e12824738f79c63271ee0d87e2b845dd5bfe0bea7b78c459da15bdc2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kilby, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:05 compute-0 systemd[1]: Started libpod-conmon-90eb334e12824738f79c63271ee0d87e2b845dd5bfe0bea7b78c459da15bdc2b.scope.
Nov 26 11:39:05 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dddb69942005c67c30f91527355d981bf004472b26309c2b347ef1e8758b3236/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dddb69942005c67c30f91527355d981bf004472b26309c2b347ef1e8758b3236/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dddb69942005c67c30f91527355d981bf004472b26309c2b347ef1e8758b3236/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dddb69942005c67c30f91527355d981bf004472b26309c2b347ef1e8758b3236/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dddb69942005c67c30f91527355d981bf004472b26309c2b347ef1e8758b3236/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:05 compute-0 podman[84454]: 2025-11-26 11:39:05.503184084 +0000 UTC m=+0.078066222 container init 90eb334e12824738f79c63271ee0d87e2b845dd5bfe0bea7b78c459da15bdc2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kilby, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 11:39:05 compute-0 podman[84454]: 2025-11-26 11:39:05.508487478 +0000 UTC m=+0.083369616 container start 90eb334e12824738f79c63271ee0d87e2b845dd5bfe0bea7b78c459da15bdc2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:05 compute-0 podman[84454]: 2025-11-26 11:39:05.50955394 +0000 UTC m=+0.084436078 container attach 90eb334e12824738f79c63271ee0d87e2b845dd5bfe0bea7b78c459da15bdc2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 11:39:05 compute-0 podman[84454]: 2025-11-26 11:39:05.440900013 +0000 UTC m=+0.015782151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:05 compute-0 ceph-mon[74928]: Removing key for mgr.compute-0.zduqno
Nov 26 11:39:05 compute-0 ceph-mon[74928]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:06 compute-0 interesting_kilby[84468]: --> passed data devices: 0 physical, 3 LVM
Nov 26 11:39:06 compute-0 interesting_kilby[84468]: --> relative data size: 1.0
Nov 26 11:39:06 compute-0 interesting_kilby[84468]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 26 11:39:06 compute-0 interesting_kilby[84468]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new a9ad59a0-aa2e-4d92-b571-519d2d145b6a
Nov 26 11:39:06 compute-0 ceph-mgr[75197]: [progress INFO root] Writing back 3 completed events
Nov 26 11:39:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 26 11:39:06 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:39:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a"} v 0) v1
Nov 26 11:39:06 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1347576333' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a"}]: dispatch
Nov 26 11:39:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 26 11:39:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 11:39:06 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1347576333' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a"}]': finished
Nov 26 11:39:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 26 11:39:06 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 26 11:39:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 11:39:06 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 11:39:06 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 11:39:06 compute-0 interesting_kilby[84468]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 26 11:39:06 compute-0 lvm[84529]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 26 11:39:06 compute-0 lvm[84529]: VG ceph_vg0 finished
Nov 26 11:39:06 compute-0 interesting_kilby[84468]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 26 11:39:06 compute-0 interesting_kilby[84468]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 26 11:39:06 compute-0 interesting_kilby[84468]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 26 11:39:06 compute-0 interesting_kilby[84468]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 26 11:39:06 compute-0 interesting_kilby[84468]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 26 11:39:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 26 11:39:07 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3049755266' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 26 11:39:07 compute-0 interesting_kilby[84468]:  stderr: got monmap epoch 1
Nov 26 11:39:07 compute-0 interesting_kilby[84468]: --> Creating keyring file for osd.0
Nov 26 11:39:07 compute-0 interesting_kilby[84468]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 26 11:39:07 compute-0 interesting_kilby[84468]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 26 11:39:07 compute-0 interesting_kilby[84468]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid a9ad59a0-aa2e-4d92-b571-519d2d145b6a --setuser ceph --setgroup ceph
Nov 26 11:39:07 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:07 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:07 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1347576333' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a"}]: dispatch
Nov 26 11:39:07 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1347576333' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a"}]': finished
Nov 26 11:39:07 compute-0 ceph-mon[74928]: osdmap e4: 1 total, 0 up, 1 in
Nov 26 11:39:07 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 11:39:07 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3049755266' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 26 11:39:07 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 26 11:39:07 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 26 11:39:08 compute-0 ceph-mon[74928]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:08 compute-0 ceph-mon[74928]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 26 11:39:08 compute-0 ceph-mon[74928]: Cluster is now healthy
Nov 26 11:39:09 compute-0 interesting_kilby[84468]:  stderr: 2025-11-26T11:39:07.109+0000 7ffbbcc1a740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 11:39:09 compute-0 interesting_kilby[84468]:  stderr: 2025-11-26T11:39:07.109+0000 7ffbbcc1a740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 11:39:09 compute-0 interesting_kilby[84468]:  stderr: 2025-11-26T11:39:07.110+0000 7ffbbcc1a740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 11:39:09 compute-0 interesting_kilby[84468]:  stderr: 2025-11-26T11:39:07.110+0000 7ffbbcc1a740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Nov 26 11:39:09 compute-0 interesting_kilby[84468]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 26 11:39:09 compute-0 interesting_kilby[84468]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 26 11:39:09 compute-0 interesting_kilby[84468]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 26 11:39:09 compute-0 interesting_kilby[84468]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 26 11:39:09 compute-0 interesting_kilby[84468]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 26 11:39:09 compute-0 interesting_kilby[84468]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 26 11:39:09 compute-0 interesting_kilby[84468]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 26 11:39:09 compute-0 interesting_kilby[84468]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 26 11:39:09 compute-0 interesting_kilby[84468]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 26 11:39:09 compute-0 interesting_kilby[84468]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 26 11:39:09 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:09 compute-0 interesting_kilby[84468]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 2627095b-eef8-4027-bfef-68bf7cb6801f
Nov 26 11:39:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f"} v 0) v1
Nov 26 11:39:09 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1991319871' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f"}]: dispatch
Nov 26 11:39:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 26 11:39:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 11:39:09 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1991319871' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f"}]': finished
Nov 26 11:39:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 26 11:39:09 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 26 11:39:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 11:39:09 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 11:39:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 11:39:09 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 11:39:09 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 11:39:09 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 11:39:09 compute-0 lvm[85478]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 26 11:39:09 compute-0 lvm[85478]: VG ceph_vg1 finished
Nov 26 11:39:09 compute-0 interesting_kilby[84468]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 26 11:39:09 compute-0 interesting_kilby[84468]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 26 11:39:09 compute-0 interesting_kilby[84468]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Nov 26 11:39:09 compute-0 interesting_kilby[84468]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 26 11:39:09 compute-0 interesting_kilby[84468]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 26 11:39:09 compute-0 interesting_kilby[84468]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 26 11:39:10 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 26 11:39:10 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3752627677' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 26 11:39:10 compute-0 interesting_kilby[84468]:  stderr: got monmap epoch 1
Nov 26 11:39:10 compute-0 interesting_kilby[84468]: --> Creating keyring file for osd.1
Nov 26 11:39:10 compute-0 interesting_kilby[84468]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 26 11:39:10 compute-0 interesting_kilby[84468]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 26 11:39:10 compute-0 interesting_kilby[84468]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 2627095b-eef8-4027-bfef-68bf7cb6801f --setuser ceph --setgroup ceph
Nov 26 11:39:10 compute-0 ceph-mon[74928]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:10 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1991319871' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f"}]: dispatch
Nov 26 11:39:10 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1991319871' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f"}]': finished
Nov 26 11:39:10 compute-0 ceph-mon[74928]: osdmap e5: 2 total, 0 up, 2 in
Nov 26 11:39:10 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 11:39:10 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 11:39:10 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3752627677' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 26 11:39:11 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:39:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:39:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:39:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:39:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:39:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:39:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:39:12 compute-0 interesting_kilby[84468]:  stderr: 2025-11-26T11:39:10.131+0000 7fe0bb65f740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 11:39:12 compute-0 interesting_kilby[84468]:  stderr: 2025-11-26T11:39:10.131+0000 7fe0bb65f740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 11:39:12 compute-0 interesting_kilby[84468]:  stderr: 2025-11-26T11:39:10.131+0000 7fe0bb65f740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 11:39:12 compute-0 interesting_kilby[84468]:  stderr: 2025-11-26T11:39:10.131+0000 7fe0bb65f740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 26 11:39:12 compute-0 interesting_kilby[84468]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Nov 26 11:39:12 compute-0 interesting_kilby[84468]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 26 11:39:12 compute-0 interesting_kilby[84468]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 26 11:39:12 compute-0 interesting_kilby[84468]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 26 11:39:12 compute-0 interesting_kilby[84468]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 26 11:39:12 compute-0 interesting_kilby[84468]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 26 11:39:12 compute-0 interesting_kilby[84468]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 26 11:39:12 compute-0 interesting_kilby[84468]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 26 11:39:12 compute-0 interesting_kilby[84468]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Nov 26 11:39:12 compute-0 interesting_kilby[84468]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 26 11:39:12 compute-0 interesting_kilby[84468]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new d56156fb-7361-4bef-b06b-1320109b4323
Nov 26 11:39:12 compute-0 ceph-mon[74928]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "d56156fb-7361-4bef-b06b-1320109b4323"} v 0) v1
Nov 26 11:39:12 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3996719501' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d56156fb-7361-4bef-b06b-1320109b4323"}]: dispatch
Nov 26 11:39:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 26 11:39:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 11:39:12 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3996719501' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d56156fb-7361-4bef-b06b-1320109b4323"}]': finished
Nov 26 11:39:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Nov 26 11:39:12 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Nov 26 11:39:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 11:39:12 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 11:39:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 11:39:12 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 11:39:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 11:39:12 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:12 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 11:39:12 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 11:39:12 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 11:39:12 compute-0 interesting_kilby[84468]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 26 11:39:12 compute-0 lvm[86432]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 26 11:39:12 compute-0 lvm[86432]: VG ceph_vg2 finished
Nov 26 11:39:12 compute-0 interesting_kilby[84468]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Nov 26 11:39:12 compute-0 interesting_kilby[84468]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Nov 26 11:39:12 compute-0 interesting_kilby[84468]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 26 11:39:12 compute-0 interesting_kilby[84468]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 26 11:39:12 compute-0 interesting_kilby[84468]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Nov 26 11:39:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 26 11:39:13 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/72481810' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 26 11:39:13 compute-0 interesting_kilby[84468]:  stderr: got monmap epoch 1
Nov 26 11:39:13 compute-0 interesting_kilby[84468]: --> Creating keyring file for osd.2
Nov 26 11:39:13 compute-0 interesting_kilby[84468]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Nov 26 11:39:13 compute-0 interesting_kilby[84468]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Nov 26 11:39:13 compute-0 interesting_kilby[84468]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid d56156fb-7361-4bef-b06b-1320109b4323 --setuser ceph --setgroup ceph
Nov 26 11:39:13 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:13 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3996719501' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d56156fb-7361-4bef-b06b-1320109b4323"}]: dispatch
Nov 26 11:39:13 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3996719501' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d56156fb-7361-4bef-b06b-1320109b4323"}]': finished
Nov 26 11:39:13 compute-0 ceph-mon[74928]: osdmap e6: 3 total, 0 up, 3 in
Nov 26 11:39:13 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 11:39:13 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 11:39:13 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:13 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/72481810' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 26 11:39:14 compute-0 ceph-mon[74928]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:15 compute-0 interesting_kilby[84468]:  stderr: 2025-11-26T11:39:13.126+0000 7efc40484740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 11:39:15 compute-0 interesting_kilby[84468]:  stderr: 2025-11-26T11:39:13.126+0000 7efc40484740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 11:39:15 compute-0 interesting_kilby[84468]:  stderr: 2025-11-26T11:39:13.126+0000 7efc40484740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 11:39:15 compute-0 interesting_kilby[84468]:  stderr: 2025-11-26T11:39:13.127+0000 7efc40484740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Nov 26 11:39:15 compute-0 interesting_kilby[84468]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Nov 26 11:39:15 compute-0 interesting_kilby[84468]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 26 11:39:15 compute-0 interesting_kilby[84468]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Nov 26 11:39:15 compute-0 interesting_kilby[84468]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 26 11:39:15 compute-0 interesting_kilby[84468]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Nov 26 11:39:15 compute-0 interesting_kilby[84468]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 26 11:39:15 compute-0 interesting_kilby[84468]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 26 11:39:15 compute-0 interesting_kilby[84468]: --> ceph-volume lvm activate successful for osd ID: 2
Nov 26 11:39:15 compute-0 interesting_kilby[84468]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Nov 26 11:39:15 compute-0 systemd[1]: libpod-90eb334e12824738f79c63271ee0d87e2b845dd5bfe0bea7b78c459da15bdc2b.scope: Deactivated successfully.
Nov 26 11:39:15 compute-0 systemd[1]: libpod-90eb334e12824738f79c63271ee0d87e2b845dd5bfe0bea7b78c459da15bdc2b.scope: Consumed 4.077s CPU time.
Nov 26 11:39:15 compute-0 podman[87351]: 2025-11-26 11:39:15.37065819 +0000 UTC m=+0.016501380 container died 90eb334e12824738f79c63271ee0d87e2b845dd5bfe0bea7b78c459da15bdc2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kilby, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 11:39:15 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-dddb69942005c67c30f91527355d981bf004472b26309c2b347ef1e8758b3236-merged.mount: Deactivated successfully.
Nov 26 11:39:15 compute-0 podman[87351]: 2025-11-26 11:39:15.403019733 +0000 UTC m=+0.048862913 container remove 90eb334e12824738f79c63271ee0d87e2b845dd5bfe0bea7b78c459da15bdc2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 11:39:15 compute-0 systemd[1]: libpod-conmon-90eb334e12824738f79c63271ee0d87e2b845dd5bfe0bea7b78c459da15bdc2b.scope: Deactivated successfully.
Nov 26 11:39:15 compute-0 sudo[84364]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:15 compute-0 sudo[87363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:15 compute-0 sudo[87363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:15 compute-0 sudo[87363]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:15 compute-0 sudo[87388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:15 compute-0 sudo[87388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:15 compute-0 sudo[87388]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:15 compute-0 sudo[87413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:15 compute-0 sudo[87413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:15 compute-0 sudo[87413]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:15 compute-0 sudo[87438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- lvm list --format json
Nov 26 11:39:15 compute-0 sudo[87438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:15 compute-0 podman[87493]: 2025-11-26 11:39:15.787388651 +0000 UTC m=+0.024794320 container create 0a8360f6590a97d81edd0c994609eb5311993a20f1bf05278b020e1282739dbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 11:39:15 compute-0 systemd[1]: Started libpod-conmon-0a8360f6590a97d81edd0c994609eb5311993a20f1bf05278b020e1282739dbc.scope.
Nov 26 11:39:15 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:15 compute-0 podman[87493]: 2025-11-26 11:39:15.841516098 +0000 UTC m=+0.078921777 container init 0a8360f6590a97d81edd0c994609eb5311993a20f1bf05278b020e1282739dbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ardinghelli, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 11:39:15 compute-0 podman[87493]: 2025-11-26 11:39:15.845482465 +0000 UTC m=+0.082888124 container start 0a8360f6590a97d81edd0c994609eb5311993a20f1bf05278b020e1282739dbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 11:39:15 compute-0 podman[87493]: 2025-11-26 11:39:15.846472015 +0000 UTC m=+0.083877674 container attach 0a8360f6590a97d81edd0c994609eb5311993a20f1bf05278b020e1282739dbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ardinghelli, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 11:39:15 compute-0 loving_ardinghelli[87506]: 167 167
Nov 26 11:39:15 compute-0 systemd[1]: libpod-0a8360f6590a97d81edd0c994609eb5311993a20f1bf05278b020e1282739dbc.scope: Deactivated successfully.
Nov 26 11:39:15 compute-0 conmon[87506]: conmon 0a8360f6590a97d81edd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0a8360f6590a97d81edd0c994609eb5311993a20f1bf05278b020e1282739dbc.scope/container/memory.events
Nov 26 11:39:15 compute-0 podman[87493]: 2025-11-26 11:39:15.84951107 +0000 UTC m=+0.086916728 container died 0a8360f6590a97d81edd0c994609eb5311993a20f1bf05278b020e1282739dbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ardinghelli, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-4eee7229952447a81cde7ddfddebbfa2fd1e2a7e0151e66d0cfac6609fa2ca59-merged.mount: Deactivated successfully.
Nov 26 11:39:15 compute-0 podman[87493]: 2025-11-26 11:39:15.866027538 +0000 UTC m=+0.103433196 container remove 0a8360f6590a97d81edd0c994609eb5311993a20f1bf05278b020e1282739dbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 26 11:39:15 compute-0 podman[87493]: 2025-11-26 11:39:15.777253622 +0000 UTC m=+0.014659301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:15 compute-0 systemd[1]: libpod-conmon-0a8360f6590a97d81edd0c994609eb5311993a20f1bf05278b020e1282739dbc.scope: Deactivated successfully.
Nov 26 11:39:15 compute-0 podman[87529]: 2025-11-26 11:39:15.97083509 +0000 UTC m=+0.027167994 container create 96ab60654853d501ae1a4637e55da642398041ed31447e28b485d1d0147e3bf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 11:39:15 compute-0 systemd[1]: Started libpod-conmon-96ab60654853d501ae1a4637e55da642398041ed31447e28b485d1d0147e3bf7.scope.
Nov 26 11:39:16 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7e2fd20d5a2edab5892c0bcdbc4711b624831b2c96a93f496c17756825c985/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7e2fd20d5a2edab5892c0bcdbc4711b624831b2c96a93f496c17756825c985/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7e2fd20d5a2edab5892c0bcdbc4711b624831b2c96a93f496c17756825c985/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7e2fd20d5a2edab5892c0bcdbc4711b624831b2c96a93f496c17756825c985/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:16 compute-0 podman[87529]: 2025-11-26 11:39:16.07982013 +0000 UTC m=+0.136153056 container init 96ab60654853d501ae1a4637e55da642398041ed31447e28b485d1d0147e3bf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 11:39:16 compute-0 podman[87529]: 2025-11-26 11:39:15.959897186 +0000 UTC m=+0.016230112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:16 compute-0 podman[87529]: 2025-11-26 11:39:16.085007058 +0000 UTC m=+0.141339963 container start 96ab60654853d501ae1a4637e55da642398041ed31447e28b485d1d0147e3bf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:16 compute-0 podman[87529]: 2025-11-26 11:39:16.087694521 +0000 UTC m=+0.144027436 container attach 96ab60654853d501ae1a4637e55da642398041ed31447e28b485d1d0147e3bf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:39:16 compute-0 ceph-mon[74928]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]: {
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:     "0": [
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:         {
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "devices": [
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "/dev/loop3"
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             ],
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "lv_name": "ceph_lv0",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "lv_size": "21470642176",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a9ad59a0-aa2e-4d92-b571-519d2d145b6a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "lv_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "name": "ceph_lv0",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "tags": {
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.block_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.cluster_name": "ceph",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.crush_device_class": "",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.encrypted": "0",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.osd_fsid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.osd_id": "0",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.type": "block",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.vdo": "0"
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             },
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "type": "block",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "vg_name": "ceph_vg0"
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:         }
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:     ],
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:     "1": [
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:         {
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "devices": [
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "/dev/loop4"
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             ],
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "lv_name": "ceph_lv1",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "lv_size": "21470642176",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2627095b-eef8-4027-bfef-68bf7cb6801f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "lv_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "name": "ceph_lv1",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "tags": {
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.block_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.cluster_name": "ceph",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.crush_device_class": "",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.encrypted": "0",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.osd_fsid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.osd_id": "1",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.type": "block",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.vdo": "0"
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             },
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "type": "block",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "vg_name": "ceph_vg1"
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:         }
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:     ],
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:     "2": [
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:         {
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "devices": [
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "/dev/loop5"
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             ],
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "lv_name": "ceph_lv2",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "lv_size": "21470642176",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d56156fb-7361-4bef-b06b-1320109b4323,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "lv_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "name": "ceph_lv2",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "tags": {
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.block_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.cluster_name": "ceph",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.crush_device_class": "",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.encrypted": "0",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.osd_fsid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.osd_id": "2",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.type": "block",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:                 "ceph.vdo": "0"
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             },
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "type": "block",
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:             "vg_name": "ceph_vg2"
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:         }
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]:     ]
Nov 26 11:39:16 compute-0 fervent_khayyam[87542]: }
Nov 26 11:39:16 compute-0 systemd[1]: libpod-96ab60654853d501ae1a4637e55da642398041ed31447e28b485d1d0147e3bf7.scope: Deactivated successfully.
Nov 26 11:39:16 compute-0 podman[87529]: 2025-11-26 11:39:16.715041628 +0000 UTC m=+0.771374553 container died 96ab60654853d501ae1a4637e55da642398041ed31447e28b485d1d0147e3bf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 11:39:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb7e2fd20d5a2edab5892c0bcdbc4711b624831b2c96a93f496c17756825c985-merged.mount: Deactivated successfully.
Nov 26 11:39:16 compute-0 podman[87529]: 2025-11-26 11:39:16.744619102 +0000 UTC m=+0.800952007 container remove 96ab60654853d501ae1a4637e55da642398041ed31447e28b485d1d0147e3bf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:16 compute-0 systemd[1]: libpod-conmon-96ab60654853d501ae1a4637e55da642398041ed31447e28b485d1d0147e3bf7.scope: Deactivated successfully.
Nov 26 11:39:16 compute-0 sudo[87438]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 26 11:39:16 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 26 11:39:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:39:16 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:16 compute-0 ceph-mgr[75197]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Nov 26 11:39:16 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Nov 26 11:39:16 compute-0 sudo[87561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:16 compute-0 sudo[87561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:16 compute-0 sudo[87561]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:16 compute-0 sudo[87586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:16 compute-0 sudo[87586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:16 compute-0 sudo[87586]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:16 compute-0 sudo[87611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:16 compute-0 sudo[87611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:16 compute-0 sudo[87611]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:16 compute-0 sudo[87636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7
Nov 26 11:39:16 compute-0 sudo[87636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:17 compute-0 podman[87695]: 2025-11-26 11:39:17.154461386 +0000 UTC m=+0.026125824 container create 2bdc13ee16ff963da7b467286013384be82f95bb967f8f67283325d475691a9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_easley, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 11:39:17 compute-0 systemd[1]: Started libpod-conmon-2bdc13ee16ff963da7b467286013384be82f95bb967f8f67283325d475691a9d.scope.
Nov 26 11:39:17 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:17 compute-0 podman[87695]: 2025-11-26 11:39:17.20700497 +0000 UTC m=+0.078669408 container init 2bdc13ee16ff963da7b467286013384be82f95bb967f8f67283325d475691a9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:17 compute-0 podman[87695]: 2025-11-26 11:39:17.211392158 +0000 UTC m=+0.083056587 container start 2bdc13ee16ff963da7b467286013384be82f95bb967f8f67283325d475691a9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_easley, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:17 compute-0 podman[87695]: 2025-11-26 11:39:17.212551754 +0000 UTC m=+0.084216182 container attach 2bdc13ee16ff963da7b467286013384be82f95bb967f8f67283325d475691a9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:17 compute-0 eager_easley[87709]: 167 167
Nov 26 11:39:17 compute-0 systemd[1]: libpod-2bdc13ee16ff963da7b467286013384be82f95bb967f8f67283325d475691a9d.scope: Deactivated successfully.
Nov 26 11:39:17 compute-0 podman[87695]: 2025-11-26 11:39:17.214370408 +0000 UTC m=+0.086034876 container died 2bdc13ee16ff963da7b467286013384be82f95bb967f8f67283325d475691a9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_easley, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fffa30ab3a6afb42083da333d32e7408968f40c95ff82610a66020e31d699c5-merged.mount: Deactivated successfully.
Nov 26 11:39:17 compute-0 podman[87695]: 2025-11-26 11:39:17.231253115 +0000 UTC m=+0.102917542 container remove 2bdc13ee16ff963da7b467286013384be82f95bb967f8f67283325d475691a9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_easley, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 11:39:17 compute-0 podman[87695]: 2025-11-26 11:39:17.143602896 +0000 UTC m=+0.015267334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:17 compute-0 systemd[1]: libpod-conmon-2bdc13ee16ff963da7b467286013384be82f95bb967f8f67283325d475691a9d.scope: Deactivated successfully.
Nov 26 11:39:17 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:17 compute-0 podman[87738]: 2025-11-26 11:39:17.405906017 +0000 UTC m=+0.026367444 container create bfb2d2f0bd0f6351d534125f9917d489d4ae6d39c34bcecb2cb3d491474a3bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 11:39:17 compute-0 systemd[1]: Started libpod-conmon-bfb2d2f0bd0f6351d534125f9917d489d4ae6d39c34bcecb2cb3d491474a3bb3.scope.
Nov 26 11:39:17 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17694ee169c282e0a94e882be249956aaa745bf80e41a4a2dff87757958d1e72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17694ee169c282e0a94e882be249956aaa745bf80e41a4a2dff87757958d1e72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17694ee169c282e0a94e882be249956aaa745bf80e41a4a2dff87757958d1e72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17694ee169c282e0a94e882be249956aaa745bf80e41a4a2dff87757958d1e72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17694ee169c282e0a94e882be249956aaa745bf80e41a4a2dff87757958d1e72/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:17 compute-0 podman[87738]: 2025-11-26 11:39:17.466827332 +0000 UTC m=+0.087288749 container init bfb2d2f0bd0f6351d534125f9917d489d4ae6d39c34bcecb2cb3d491474a3bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0-activate-test, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Nov 26 11:39:17 compute-0 podman[87738]: 2025-11-26 11:39:17.471210985 +0000 UTC m=+0.091672383 container start bfb2d2f0bd0f6351d534125f9917d489d4ae6d39c34bcecb2cb3d491474a3bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:17 compute-0 podman[87738]: 2025-11-26 11:39:17.472325373 +0000 UTC m=+0.092786771 container attach bfb2d2f0bd0f6351d534125f9917d489d4ae6d39c34bcecb2cb3d491474a3bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0-activate-test, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:17 compute-0 podman[87738]: 2025-11-26 11:39:17.395269851 +0000 UTC m=+0.015731268 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:17 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 26 11:39:17 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:17 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0-activate-test[87751]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 26 11:39:17 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0-activate-test[87751]:                             [--no-systemd] [--no-tmpfs]
Nov 26 11:39:17 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0-activate-test[87751]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 26 11:39:18 compute-0 systemd[1]: libpod-bfb2d2f0bd0f6351d534125f9917d489d4ae6d39c34bcecb2cb3d491474a3bb3.scope: Deactivated successfully.
Nov 26 11:39:18 compute-0 podman[87738]: 2025-11-26 11:39:18.016791059 +0000 UTC m=+0.637252457 container died bfb2d2f0bd0f6351d534125f9917d489d4ae6d39c34bcecb2cb3d491474a3bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0-activate-test, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 11:39:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-17694ee169c282e0a94e882be249956aaa745bf80e41a4a2dff87757958d1e72-merged.mount: Deactivated successfully.
Nov 26 11:39:18 compute-0 podman[87738]: 2025-11-26 11:39:18.046544772 +0000 UTC m=+0.667006169 container remove bfb2d2f0bd0f6351d534125f9917d489d4ae6d39c34bcecb2cb3d491474a3bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0-activate-test, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 11:39:18 compute-0 systemd[1]: libpod-conmon-bfb2d2f0bd0f6351d534125f9917d489d4ae6d39c34bcecb2cb3d491474a3bb3.scope: Deactivated successfully.
Nov 26 11:39:18 compute-0 systemd[1]: Reloading.
Nov 26 11:39:18 compute-0 systemd-rc-local-generator[87805]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:39:18 compute-0 systemd-sysv-generator[87809]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:39:18 compute-0 systemd[1]: Reloading.
Nov 26 11:39:18 compute-0 systemd-rc-local-generator[87846]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:39:18 compute-0 systemd-sysv-generator[87849]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:39:18 compute-0 systemd[1]: Starting Ceph osd.0 for ebab460c-3fd7-5f66-aa87-e10c143123f7...
Nov 26 11:39:18 compute-0 ceph-mon[74928]: Deploying daemon osd.0 on compute-0
Nov 26 11:39:18 compute-0 ceph-mon[74928]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:18 compute-0 podman[87900]: 2025-11-26 11:39:18.772695041 +0000 UTC m=+0.027927856 container create fab146ed64ef75c67f4f21442d9d423227ad032347dd1c5378f5f04e9a291f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0-activate, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 11:39:18 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6317c7e899319ea04363a72f1ef677637dd1851b19477b92a186d85896211a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6317c7e899319ea04363a72f1ef677637dd1851b19477b92a186d85896211a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6317c7e899319ea04363a72f1ef677637dd1851b19477b92a186d85896211a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6317c7e899319ea04363a72f1ef677637dd1851b19477b92a186d85896211a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6317c7e899319ea04363a72f1ef677637dd1851b19477b92a186d85896211a4/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:18 compute-0 podman[87900]: 2025-11-26 11:39:18.817471608 +0000 UTC m=+0.072704422 container init fab146ed64ef75c67f4f21442d9d423227ad032347dd1c5378f5f04e9a291f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0-activate, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:39:18 compute-0 podman[87900]: 2025-11-26 11:39:18.824657954 +0000 UTC m=+0.079890769 container start fab146ed64ef75c67f4f21442d9d423227ad032347dd1c5378f5f04e9a291f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0-activate, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 11:39:18 compute-0 podman[87900]: 2025-11-26 11:39:18.825894015 +0000 UTC m=+0.081126830 container attach fab146ed64ef75c67f4f21442d9d423227ad032347dd1c5378f5f04e9a291f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0-activate, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:18 compute-0 podman[87900]: 2025-11-26 11:39:18.760842651 +0000 UTC m=+0.016075486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:19 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:19 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0-activate[87912]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 26 11:39:19 compute-0 bash[87900]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 26 11:39:19 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0-activate[87912]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 26 11:39:19 compute-0 bash[87900]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 26 11:39:19 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0-activate[87912]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 26 11:39:19 compute-0 bash[87900]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 26 11:39:19 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0-activate[87912]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 26 11:39:19 compute-0 bash[87900]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 26 11:39:19 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0-activate[87912]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 26 11:39:19 compute-0 bash[87900]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 26 11:39:19 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0-activate[87912]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 26 11:39:19 compute-0 bash[87900]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 26 11:39:19 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0-activate[87912]: --> ceph-volume raw activate successful for osd ID: 0
Nov 26 11:39:19 compute-0 bash[87900]: --> ceph-volume raw activate successful for osd ID: 0
Nov 26 11:39:19 compute-0 systemd[1]: libpod-fab146ed64ef75c67f4f21442d9d423227ad032347dd1c5378f5f04e9a291f3a.scope: Deactivated successfully.
Nov 26 11:39:19 compute-0 podman[88027]: 2025-11-26 11:39:19.647809184 +0000 UTC m=+0.018024870 container died fab146ed64ef75c67f4f21442d9d423227ad032347dd1c5378f5f04e9a291f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 11:39:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6317c7e899319ea04363a72f1ef677637dd1851b19477b92a186d85896211a4-merged.mount: Deactivated successfully.
Nov 26 11:39:19 compute-0 podman[88027]: 2025-11-26 11:39:19.678395516 +0000 UTC m=+0.048611203 container remove fab146ed64ef75c67f4f21442d9d423227ad032347dd1c5378f5f04e9a291f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:19 compute-0 podman[88075]: 2025-11-26 11:39:19.808433109 +0000 UTC m=+0.023398264 container create 9ab3606df1c80b7e229cdc7a98b590808a4781cdae7cea4a12ad26e24aa49c67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 11:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09f0cbc514f32c3773d0470eeb544e3be5c3f62eb640762cdce0b83a42e13390/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09f0cbc514f32c3773d0470eeb544e3be5c3f62eb640762cdce0b83a42e13390/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09f0cbc514f32c3773d0470eeb544e3be5c3f62eb640762cdce0b83a42e13390/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09f0cbc514f32c3773d0470eeb544e3be5c3f62eb640762cdce0b83a42e13390/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09f0cbc514f32c3773d0470eeb544e3be5c3f62eb640762cdce0b83a42e13390/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:19 compute-0 podman[88075]: 2025-11-26 11:39:19.841883642 +0000 UTC m=+0.056848797 container init 9ab3606df1c80b7e229cdc7a98b590808a4781cdae7cea4a12ad26e24aa49c67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 11:39:19 compute-0 podman[88075]: 2025-11-26 11:39:19.846867692 +0000 UTC m=+0.061832848 container start 9ab3606df1c80b7e229cdc7a98b590808a4781cdae7cea4a12ad26e24aa49c67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 11:39:19 compute-0 bash[88075]: 9ab3606df1c80b7e229cdc7a98b590808a4781cdae7cea4a12ad26e24aa49c67
Nov 26 11:39:19 compute-0 podman[88075]: 2025-11-26 11:39:19.798669008 +0000 UTC m=+0.013634183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:19 compute-0 systemd[1]: Started Ceph osd.0 for ebab460c-3fd7-5f66-aa87-e10c143123f7.
Nov 26 11:39:19 compute-0 sudo[87636]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:19 compute-0 ceph-osd[88091]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 11:39:19 compute-0 ceph-osd[88091]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 26 11:39:19 compute-0 ceph-osd[88091]: pidfile_write: ignore empty --pid-file
Nov 26 11:39:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:39:19 compute-0 ceph-osd[88091]: bdev(0x563a3da23800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 11:39:19 compute-0 ceph-osd[88091]: bdev(0x563a3da23800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 11:39:19 compute-0 ceph-osd[88091]: bdev(0x563a3da23800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 11:39:19 compute-0 ceph-osd[88091]: bdev(0x563a3da23800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 11:39:19 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 11:39:19 compute-0 ceph-osd[88091]: bdev(0x563a3e85b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 11:39:19 compute-0 ceph-osd[88091]: bdev(0x563a3e85b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 11:39:19 compute-0 ceph-osd[88091]: bdev(0x563a3e85b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 11:39:19 compute-0 ceph-osd[88091]: bdev(0x563a3e85b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 11:39:19 compute-0 ceph-osd[88091]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 26 11:39:19 compute-0 ceph-osd[88091]: bdev(0x563a3e85b800 /var/lib/ceph/osd/ceph-0/block) close
Nov 26 11:39:19 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:39:19 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 26 11:39:19 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 26 11:39:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:39:19 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:19 compute-0 ceph-mgr[75197]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Nov 26 11:39:19 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Nov 26 11:39:19 compute-0 sudo[88104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:19 compute-0 sudo[88104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:19 compute-0 sudo[88104]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:19 compute-0 sudo[88129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:19 compute-0 sudo[88129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:19 compute-0 sudo[88129]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:19 compute-0 sudo[88154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:19 compute-0 sudo[88154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:19 compute-0 sudo[88154]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:20 compute-0 sudo[88179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7
Nov 26 11:39:20 compute-0 sudo[88179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bdev(0x563a3da23800 /var/lib/ceph/osd/ceph-0/block) close
Nov 26 11:39:20 compute-0 podman[88241]: 2025-11-26 11:39:20.258389484 +0000 UTC m=+0.028365130 container create ef653004cd911773869f71c6a83e6b0fe4dc57125ca16df9888009c86277bf21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:20 compute-0 systemd[1]: Started libpod-conmon-ef653004cd911773869f71c6a83e6b0fe4dc57125ca16df9888009c86277bf21.scope.
Nov 26 11:39:20 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:20 compute-0 podman[88241]: 2025-11-26 11:39:20.314219113 +0000 UTC m=+0.084194760 container init ef653004cd911773869f71c6a83e6b0fe4dc57125ca16df9888009c86277bf21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:20 compute-0 podman[88241]: 2025-11-26 11:39:20.318625901 +0000 UTC m=+0.088601548 container start ef653004cd911773869f71c6a83e6b0fe4dc57125ca16df9888009c86277bf21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:20 compute-0 podman[88241]: 2025-11-26 11:39:20.319781839 +0000 UTC m=+0.089757486 container attach ef653004cd911773869f71c6a83e6b0fe4dc57125ca16df9888009c86277bf21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:20 compute-0 elastic_zhukovsky[88256]: 167 167
Nov 26 11:39:20 compute-0 systemd[1]: libpod-ef653004cd911773869f71c6a83e6b0fe4dc57125ca16df9888009c86277bf21.scope: Deactivated successfully.
Nov 26 11:39:20 compute-0 podman[88241]: 2025-11-26 11:39:20.32235839 +0000 UTC m=+0.092334036 container died ef653004cd911773869f71c6a83e6b0fe4dc57125ca16df9888009c86277bf21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_zhukovsky, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 11:39:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9e5664a7e93901c8475bff534ebc6851968bc33b5d3e31a6b3dcf10e83752e9-merged.mount: Deactivated successfully.
Nov 26 11:39:20 compute-0 podman[88241]: 2025-11-26 11:39:20.338483661 +0000 UTC m=+0.108459308 container remove ef653004cd911773869f71c6a83e6b0fe4dc57125ca16df9888009c86277bf21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 11:39:20 compute-0 podman[88241]: 2025-11-26 11:39:20.246912702 +0000 UTC m=+0.016888359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:20 compute-0 systemd[1]: libpod-conmon-ef653004cd911773869f71c6a83e6b0fe4dc57125ca16df9888009c86277bf21.scope: Deactivated successfully.
Nov 26 11:39:20 compute-0 ceph-osd[88091]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 26 11:39:20 compute-0 ceph-osd[88091]: load: jerasure load: lrc 
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bdev(0x563a3e8e6c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bdev(0x563a3e8e6c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bdev(0x563a3e8e6c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bdev(0x563a3e8e6c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bdev(0x563a3e8e6c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bdev(0x563a3e8e6c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bdev(0x563a3e8e6c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bdev(0x563a3e8e6c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bdev(0x563a3e8e6c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bdev(0x563a3e8e6c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 26 11:39:20 compute-0 podman[88295]: 2025-11-26 11:39:20.515485619 +0000 UTC m=+0.025883310 container create 6d3642fbc4fc959f232c060b0e1f80e1b17b7e924339fe178176aa9006a8377d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1-activate-test, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 11:39:20 compute-0 systemd[1]: Started libpod-conmon-6d3642fbc4fc959f232c060b0e1f80e1b17b7e924339fe178176aa9006a8377d.scope.
Nov 26 11:39:20 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a17b857afd7a3a68f4d6759e17881b529412195fe6743ec2f43ef99ada76d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a17b857afd7a3a68f4d6759e17881b529412195fe6743ec2f43ef99ada76d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a17b857afd7a3a68f4d6759e17881b529412195fe6743ec2f43ef99ada76d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a17b857afd7a3a68f4d6759e17881b529412195fe6743ec2f43ef99ada76d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a17b857afd7a3a68f4d6759e17881b529412195fe6743ec2f43ef99ada76d7/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:20 compute-0 podman[88295]: 2025-11-26 11:39:20.57618983 +0000 UTC m=+0.086587520 container init 6d3642fbc4fc959f232c060b0e1f80e1b17b7e924339fe178176aa9006a8377d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 11:39:20 compute-0 podman[88295]: 2025-11-26 11:39:20.580738568 +0000 UTC m=+0.091136259 container start 6d3642fbc4fc959f232c060b0e1f80e1b17b7e924339fe178176aa9006a8377d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1-activate-test, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:20 compute-0 podman[88295]: 2025-11-26 11:39:20.582067887 +0000 UTC m=+0.092465578 container attach 6d3642fbc4fc959f232c060b0e1f80e1b17b7e924339fe178176aa9006a8377d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 11:39:20 compute-0 podman[88295]: 2025-11-26 11:39:20.505274024 +0000 UTC m=+0.015671735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:20 compute-0 ceph-mon[74928]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:20 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:20 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:20 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 26 11:39:20 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:20 compute-0 ceph-osd[88091]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 26 11:39:20 compute-0 ceph-osd[88091]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bdev(0x563a3e8e6c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bdev(0x563a3e8e6c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bdev(0x563a3e8e6c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bdev(0x563a3e8e6c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bdev(0x563a3e8e7400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bdev(0x563a3e8e7400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bdev(0x563a3e8e7400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bdev(0x563a3e8e7400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bluefs mount
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bluefs mount shared_bdev_used = 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: RocksDB version: 7.9.2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Git sha 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: DB SUMMARY
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: DB Session ID:  14RW6K26HQ481YNT15KS
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: CURRENT file:  CURRENT
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                         Options.error_if_exists: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.create_if_missing: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                                     Options.env: 0x563a3e8adc70
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                                Options.info_log: 0x563a3daaa8a0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                              Options.statistics: (nil)
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.use_fsync: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                              Options.db_log_dir: 
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                                 Options.wal_dir: db.wal
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.write_buffer_manager: 0x563a3e9c0460
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.unordered_write: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.row_cache: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                              Options.wal_filter: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.two_write_queues: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.wal_compression: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.atomic_flush: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.max_background_jobs: 4
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.max_background_compactions: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.max_subcompactions: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.max_open_files: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Compression algorithms supported:
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         kZSTD supported: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         kXpressCompression supported: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         kBZip2Compression supported: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         kLZ4Compression supported: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         kZlibCompression supported: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         kLZ4HCCompression supported: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         kSnappyCompression supported: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a3daaa2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a3da971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a3daaa2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a3da971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a3daaa2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a3da971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a3daaa2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a3da971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a3daaa2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a3da971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a3daaa2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a3da971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a3daaa2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a3da971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a3daaa240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a3da97090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a3daaa240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a3da97090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a3daaa240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a3da97090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d5af94e8-ecd6-4cf4-8696-0e3800666e77
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157160725713, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157160725863, "job": 1, "event": "recovery_finished"}
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: freelist init
Nov 26 11:39:20 compute-0 ceph-osd[88091]: freelist _read_cfg
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bluefs umount
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bdev(0x563a3e8e7400 /var/lib/ceph/osd/ceph-0/block) close
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bdev(0x563a3e8e7400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bdev(0x563a3e8e7400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bdev(0x563a3e8e7400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bdev(0x563a3e8e7400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bluefs mount
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bluefs mount shared_bdev_used = 4718592
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: RocksDB version: 7.9.2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Git sha 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: DB SUMMARY
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: DB Session ID:  14RW6K26HQ481YNT15KT
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: CURRENT file:  CURRENT
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                         Options.error_if_exists: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.create_if_missing: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                                     Options.env: 0x563a3dbff960
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                                Options.info_log: 0x563a3daaa600
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                              Options.statistics: (nil)
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.use_fsync: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                              Options.db_log_dir: 
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                                 Options.wal_dir: db.wal
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.write_buffer_manager: 0x563a3e9c0460
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.unordered_write: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.row_cache: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                              Options.wal_filter: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.two_write_queues: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.wal_compression: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.atomic_flush: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.max_background_jobs: 4
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.max_background_compactions: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.max_subcompactions: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.max_open_files: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Compression algorithms supported:
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         kZSTD supported: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         kXpressCompression supported: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         kBZip2Compression supported: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         kLZ4Compression supported: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         kZlibCompression supported: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         kLZ4HCCompression supported: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         kSnappyCompression supported: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a3daaaa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a3da971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a3daaaa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a3da971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a3daaaa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a3da971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a3daaaa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a3da971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a3daaaa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a3da971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a3daaaa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a3da971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a3daaaa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a3da971f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a3daaa380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a3da97090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a3daaa380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a3da97090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a3daaa380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a3da97090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d5af94e8-ecd6-4cf4-8696-0e3800666e77
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157160983520, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157160985528, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764157160, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d5af94e8-ecd6-4cf4-8696-0e3800666e77", "db_session_id": "14RW6K26HQ481YNT15KT", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157160986500, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764157160, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d5af94e8-ecd6-4cf4-8696-0e3800666e77", "db_session_id": "14RW6K26HQ481YNT15KT", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157160987435, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764157160, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d5af94e8-ecd6-4cf4-8696-0e3800666e77", "db_session_id": "14RW6K26HQ481YNT15KT", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157160987950, "job": 1, "event": "recovery_finished"}
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x563a3dc04000
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: DB pointer 0x563a3e9a9a00
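
The EVENT_LOG_v1 entries above carry their payload as a single JSON object following the literal EVENT_LOG_v1 marker, so the recovery and table_file_creation events can be extracted from a saved journal dump with the standard library alone. A minimal sketch, assuming the log has been written to a plain-text file; the filename argument and the choice of printed fields are illustrative, not taken from the log:

    import json
    import sys

    MARKER = "EVENT_LOG_v1 "

    def iter_rocksdb_events(path):
        """Yield the JSON payload of every RocksDB EVENT_LOG_v1 line in a journal dump."""
        with open(path) as fh:
            for line in fh:
                idx = line.find(MARKER)
                if idx == -1:
                    continue
                payload = line[idx + len(MARKER):].strip()
                try:
                    yield json.loads(payload)
                except json.JSONDecodeError:
                    continue  # tolerate truncated lines instead of aborting the scan

    if __name__ == "__main__":
        for event in iter_rocksdb_events(sys.argv[1]):
            if event.get("event") == "table_file_creation":
                props = event.get("table_properties", {})
                print(event["cf_name"], event["file_number"],
                      event["file_size"], props.get("num_entries"))

Run against this boot it would report the three SSTs created during WAL recovery: default/35, p-0/36 and O-2/37.
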
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 26 11:39:20 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
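
The _open_db line above records the option string BlueStore handed to RocksDB (compression=kLZ4Compression,max_write_buffer_number=64,...), which normally comes from the bluestore_rocksdb_options configuration option. Because it is plain comma-separated key=value text, it is easy to turn into a dict and diff against an intended tuning. A minimal sketch; the function and variable names are illustrative:

    def parse_rocksdb_options(opts: str) -> dict:
        """Split a BlueStore-style RocksDB option string (k=v,k=v,...) into a dict."""
        result = {}
        for item in opts.split(","):
            if not item:
                continue
            key, _, value = item.partition("=")
            result[key.strip()] = value.strip()
        return result

    # Option string exactly as logged by _open_db above.
    logged = ("compression=kLZ4Compression,max_write_buffer_number=64,"
              "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
              "write_buffer_size=16777216,max_background_jobs=4,"
              "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
              "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
              "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

    opts = parse_rocksdb_options(logged)
    print(opts["write_buffer_size"], opts["max_background_jobs"])   # 16777216 4
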
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 11:39:20 compute-0 ceph-osd[88091]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da97090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da97090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.3      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.3      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.3      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.3      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da97090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
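
The stats dump above is emitted as one multi-line journal message: a "** Compaction Stats [<cf>] **" header per column family followed by fixed-width rows, where the Sum line gives the file count and on-disk size for that CF (here only default, p-0 and O-2 hold data, matching the three SSTs written during recovery, and the zero cumulative stall time shows nothing is throttling writes yet). A rough sketch for pulling the Sum rows out of a saved dump; the whitespace-splitting is a heuristic, not an official parser:

    import re
    import sys

    HEADER = re.compile(r"\*\* Compaction Stats \[(.+?)\] \*\*")

    def sum_rows(path):
        """Yield (column_family, files, size) from the Sum line of each stats table."""
        cf, seen = None, set()
        with open(path) as fh:
            for line in fh:
                m = HEADER.search(line)
                if m:
                    cf = m.group(1)
                    continue
                tokens = line.split()
                # Dumps repeat over the daemon's lifetime; keep the first Sum per CF.
                if cf and cf not in seen and tokens[:1] == ["Sum"]:
                    seen.add(cf)
                    yield cf, tokens[1], " ".join(tokens[2:4])

    if __name__ == "__main__":
        for cf, files, size in sum_rows(sys.argv[1]):
            print(f"{cf:10s} files={files:6s} size={size}")
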
Nov 26 11:39:20 compute-0 ceph-osd[88091]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 26 11:39:20 compute-0 ceph-osd[88091]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 26 11:39:21 compute-0 ceph-osd[88091]: _get_class not permitted to load lua
Nov 26 11:39:21 compute-0 ceph-osd[88091]: _get_class not permitted to load sdk
Nov 26 11:39:21 compute-0 ceph-osd[88091]: _get_class not permitted to load test_remote_reads
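
The _get_class refusals above are expected rather than errors: the OSD only loads object classes named in its osd_class_load_list option, and lua, sdk and test_remote_reads are not on that list in this deployment (the exact defaults vary by release). A quick way to confirm the effective list from Python via the ceph CLI; the helper name is illustrative:

    import subprocess

    def class_load_list(who: str = "osd") -> list[str]:
        """Return the effective osd_class_load_list for the given config section."""
        out = subprocess.run(
            ["ceph", "config", "get", who, "osd_class_load_list"],
            check=True, capture_output=True, text=True,
        ).stdout.strip()
        return out.split()

    if __name__ == "__main__":
        allowed = class_load_list("osd")
        # A value of "*" would mean every class may be loaded.
        for cls in ("lua", "sdk", "test_remote_reads"):
            print(cls, "allowed" if cls in allowed else "not permitted")
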
Nov 26 11:39:21 compute-0 ceph-osd[88091]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 26 11:39:21 compute-0 ceph-osd[88091]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 26 11:39:21 compute-0 ceph-osd[88091]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 26 11:39:21 compute-0 ceph-osd[88091]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 26 11:39:21 compute-0 ceph-osd[88091]: osd.0 0 load_pgs
Nov 26 11:39:21 compute-0 ceph-osd[88091]: osd.0 0 load_pgs opened 0 pgs
Nov 26 11:39:21 compute-0 ceph-osd[88091]: osd.0 0 log_to_monitors true
Nov 26 11:39:21 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0[88087]: 2025-11-26T11:39:21.003+0000 7efdd3615740 -1 osd.0 0 log_to_monitors true
Nov 26 11:39:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Nov 26 11:39:21 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1602342407,v1:192.168.122.100:6803/1602342407]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 26 11:39:21 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1-activate-test[88309]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 26 11:39:21 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1-activate-test[88309]:                             [--no-systemd] [--no-tmpfs]
Nov 26 11:39:21 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1-activate-test[88309]: ceph-volume activate: error: unrecognized arguments: --bad-option
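
The osd-1-activate-test container above passes a deliberately invalid --bad-option to ceph-volume activate and exits as soon as argparse prints its usage text; judging by its name this looks like a capability probe (the usage output reveals which flags, e.g. --no-systemd and --no-tmpfs, this ceph-volume build accepts) rather than a real failure, which is why the container death logged right after it is harmless. A generic sketch of that probing pattern; the invocation at the bottom is hypothetical:

    import subprocess

    def supports_flag(cmd: list[str], flag: str) -> bool:
        """Probe an argparse-based CLI: feed it a flag it will reject and check
        whether the usage text it prints mentions the flag of interest."""
        proc = subprocess.run(
            cmd + ["--bad-option"],       # expected to be rejected by argparse
            capture_output=True, text=True,
        )
        return flag in (proc.stdout + proc.stderr)

    if __name__ == "__main__":
        # Hypothetical invocation mirroring the activate-test container above.
        print(supports_flag(["ceph-volume", "activate"], "--no-tmpfs"))
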
Nov 26 11:39:21 compute-0 systemd[1]: libpod-6d3642fbc4fc959f232c060b0e1f80e1b17b7e924339fe178176aa9006a8377d.scope: Deactivated successfully.
Nov 26 11:39:21 compute-0 podman[88295]: 2025-11-26 11:39:21.128345875 +0000 UTC m=+0.638743586 container died 6d3642fbc4fc959f232c060b0e1f80e1b17b7e924339fe178176aa9006a8377d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1-activate-test, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3a17b857afd7a3a68f4d6759e17881b529412195fe6743ec2f43ef99ada76d7-merged.mount: Deactivated successfully.
Nov 26 11:39:21 compute-0 podman[88295]: 2025-11-26 11:39:21.160444716 +0000 UTC m=+0.670842408 container remove 6d3642fbc4fc959f232c060b0e1f80e1b17b7e924339fe178176aa9006a8377d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 11:39:21 compute-0 systemd[1]: libpod-conmon-6d3642fbc4fc959f232c060b0e1f80e1b17b7e924339fe178176aa9006a8377d.scope: Deactivated successfully.
Nov 26 11:39:21 compute-0 systemd[1]: Reloading.
Nov 26 11:39:21 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:21 compute-0 systemd-sysv-generator[88775]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:39:21 compute-0 systemd-rc-local-generator[88772]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:39:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:39:21 compute-0 systemd[1]: Reloading.
Nov 26 11:39:21 compute-0 systemd-rc-local-generator[88815]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:39:21 compute-0 systemd-sysv-generator[88821]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:39:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 26 11:39:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 11:39:21 compute-0 ceph-mon[74928]: Deploying daemon osd.1 on compute-0
Nov 26 11:39:21 compute-0 ceph-mon[74928]: from='osd.0 [v2:192.168.122.100:6802/1602342407,v1:192.168.122.100:6803/1602342407]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 26 11:39:21 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1602342407,v1:192.168.122.100:6803/1602342407]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 26 11:39:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Nov 26 11:39:21 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Nov 26 11:39:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 26 11:39:21 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1602342407,v1:192.168.122.100:6803/1602342407]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 26 11:39:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 26 11:39:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 11:39:21 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 11:39:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 11:39:21 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 11:39:21 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 11:39:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 11:39:21 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:21 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 11:39:21 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 11:39:21 compute-0 systemd[1]: Starting Ceph osd.1 for ebab460c-3fd7-5f66-aa87-e10c143123f7...
Nov 26 11:39:21 compute-0 podman[88867]: 2025-11-26 11:39:21.916508883 +0000 UTC m=+0.025262835 container create 21bdf9aae3508b0a05cc1bf66685e505831804721fbf74a3319353d86ea720cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 11:39:21 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fb568b2051dffeec5df32ebd9cc35f0ee62ab2a399d37812c6cfe4d040e228/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fb568b2051dffeec5df32ebd9cc35f0ee62ab2a399d37812c6cfe4d040e228/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fb568b2051dffeec5df32ebd9cc35f0ee62ab2a399d37812c6cfe4d040e228/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fb568b2051dffeec5df32ebd9cc35f0ee62ab2a399d37812c6cfe4d040e228/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fb568b2051dffeec5df32ebd9cc35f0ee62ab2a399d37812c6cfe4d040e228/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:21 compute-0 podman[88867]: 2025-11-26 11:39:21.965977643 +0000 UTC m=+0.074731604 container init 21bdf9aae3508b0a05cc1bf66685e505831804721fbf74a3319353d86ea720cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1-activate, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:21 compute-0 podman[88867]: 2025-11-26 11:39:21.970009785 +0000 UTC m=+0.078763735 container start 21bdf9aae3508b0a05cc1bf66685e505831804721fbf74a3319353d86ea720cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:39:21 compute-0 podman[88867]: 2025-11-26 11:39:21.971094657 +0000 UTC m=+0.079848638 container attach 21bdf9aae3508b0a05cc1bf66685e505831804721fbf74a3319353d86ea720cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1-activate, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 11:39:22 compute-0 podman[88867]: 2025-11-26 11:39:21.905887686 +0000 UTC m=+0.014641657 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:22 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 26 11:39:22 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 26 11:39:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 26 11:39:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 11:39:22 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1602342407,v1:192.168.122.100:6803/1602342407]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 26 11:39:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Nov 26 11:39:22 compute-0 ceph-osd[88091]: osd.0 0 done with init, starting boot process
Nov 26 11:39:22 compute-0 ceph-osd[88091]: osd.0 0 start_boot
Nov 26 11:39:22 compute-0 ceph-osd[88091]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 26 11:39:22 compute-0 ceph-osd[88091]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 26 11:39:22 compute-0 ceph-osd[88091]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 26 11:39:22 compute-0 ceph-osd[88091]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 26 11:39:22 compute-0 ceph-osd[88091]: osd.0 0  bench count 12288000 bsize 4 KiB
Nov 26 11:39:22 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Nov 26 11:39:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 11:39:22 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 11:39:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 11:39:22 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 11:39:22 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 11:39:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 11:39:22 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 11:39:22 compute-0 ceph-mgr[75197]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1602342407; not ready for session (expect reconnect)
Nov 26 11:39:22 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 11:39:22 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 11:39:22 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 11:39:22 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 11:39:22 compute-0 ceph-mon[74928]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:22 compute-0 ceph-mon[74928]: from='osd.0 [v2:192.168.122.100:6802/1602342407,v1:192.168.122.100:6803/1602342407]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 26 11:39:22 compute-0 ceph-mon[74928]: osdmap e7: 3 total, 0 up, 3 in
Nov 26 11:39:22 compute-0 ceph-mon[74928]: from='osd.0 [v2:192.168.122.100:6802/1602342407,v1:192.168.122.100:6803/1602342407]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 26 11:39:22 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 11:39:22 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 11:39:22 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:22 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1-activate[88880]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 26 11:39:22 compute-0 bash[88867]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 26 11:39:22 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1-activate[88880]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 26 11:39:22 compute-0 bash[88867]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 26 11:39:22 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1-activate[88880]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 26 11:39:22 compute-0 bash[88867]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 26 11:39:22 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1-activate[88880]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 26 11:39:22 compute-0 bash[88867]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 26 11:39:22 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1-activate[88880]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 26 11:39:22 compute-0 bash[88867]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 26 11:39:22 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1-activate[88880]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 26 11:39:22 compute-0 bash[88867]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 26 11:39:22 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1-activate[88880]: --> ceph-volume raw activate successful for osd ID: 1
Nov 26 11:39:22 compute-0 bash[88867]: --> ceph-volume raw activate successful for osd ID: 1
Nov 26 11:39:22 compute-0 systemd[1]: libpod-21bdf9aae3508b0a05cc1bf66685e505831804721fbf74a3319353d86ea720cb.scope: Deactivated successfully.
Nov 26 11:39:22 compute-0 conmon[88880]: conmon 21bdf9aae3508b0a05cc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-21bdf9aae3508b0a05cc1bf66685e505831804721fbf74a3319353d86ea720cb.scope/container/memory.events
Nov 26 11:39:22 compute-0 podman[88867]: 2025-11-26 11:39:22.800421051 +0000 UTC m=+0.909175012 container died 21bdf9aae3508b0a05cc1bf66685e505831804721fbf74a3319353d86ea720cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1-activate, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 11:39:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-78fb568b2051dffeec5df32ebd9cc35f0ee62ab2a399d37812c6cfe4d040e228-merged.mount: Deactivated successfully.
Nov 26 11:39:22 compute-0 podman[88867]: 2025-11-26 11:39:22.941677022 +0000 UTC m=+1.050430974 container remove 21bdf9aae3508b0a05cc1bf66685e505831804721fbf74a3319353d86ea720cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:23 compute-0 podman[89058]: 2025-11-26 11:39:23.150591386 +0000 UTC m=+0.085097144 container create 6de4753530621e0f730a3b8a4f76c0440389b7c241149bfa7593ca1b6509f605 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 11:39:23 compute-0 podman[89058]: 2025-11-26 11:39:23.081819547 +0000 UTC m=+0.016325324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432dac1be3b10683cd8d87b81041db789925a3f8d5005c710bdb4bae7a37a1e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432dac1be3b10683cd8d87b81041db789925a3f8d5005c710bdb4bae7a37a1e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432dac1be3b10683cd8d87b81041db789925a3f8d5005c710bdb4bae7a37a1e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432dac1be3b10683cd8d87b81041db789925a3f8d5005c710bdb4bae7a37a1e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432dac1be3b10683cd8d87b81041db789925a3f8d5005c710bdb4bae7a37a1e5/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:23 compute-0 podman[89058]: 2025-11-26 11:39:23.236542199 +0000 UTC m=+0.171047966 container init 6de4753530621e0f730a3b8a4f76c0440389b7c241149bfa7593ca1b6509f605 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:23 compute-0 podman[89058]: 2025-11-26 11:39:23.242163006 +0000 UTC m=+0.176668763 container start 6de4753530621e0f730a3b8a4f76c0440389b7c241149bfa7593ca1b6509f605 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Nov 26 11:39:23 compute-0 bash[89058]: 6de4753530621e0f730a3b8a4f76c0440389b7c241149bfa7593ca1b6509f605
Nov 26 11:39:23 compute-0 systemd[1]: Started Ceph osd.1 for ebab460c-3fd7-5f66-aa87-e10c143123f7.
Nov 26 11:39:23 compute-0 sudo[88179]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:39:23 compute-0 ceph-osd[89074]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 11:39:23 compute-0 ceph-osd[89074]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 26 11:39:23 compute-0 ceph-osd[89074]: pidfile_write: ignore empty --pid-file
Nov 26 11:39:23 compute-0 ceph-osd[89074]: bdev(0x55c969637800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 11:39:23 compute-0 ceph-osd[89074]: bdev(0x55c969637800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 11:39:23 compute-0 ceph-osd[89074]: bdev(0x55c969637800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 11:39:23 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:23 compute-0 ceph-osd[89074]: bdev(0x55c969637800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 11:39:23 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 11:39:23 compute-0 ceph-osd[89074]: bdev(0x55c96a46f800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 11:39:23 compute-0 ceph-osd[89074]: bdev(0x55c96a46f800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 11:39:23 compute-0 ceph-osd[89074]: bdev(0x55c96a46f800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 11:39:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:39:23 compute-0 ceph-osd[89074]: bdev(0x55c96a46f800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 11:39:23 compute-0 ceph-osd[89074]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 26 11:39:23 compute-0 ceph-osd[89074]: bdev(0x55c96a46f800 /var/lib/ceph/osd/ceph-1/block) close
Nov 26 11:39:23 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Nov 26 11:39:23 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 26 11:39:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:39:23 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:23 compute-0 ceph-mgr[75197]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Nov 26 11:39:23 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Nov 26 11:39:23 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:23 compute-0 sudo[89087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:23 compute-0 sudo[89087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:23 compute-0 sudo[89087]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:23 compute-0 sudo[89112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:23 compute-0 sudo[89112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:23 compute-0 sudo[89112]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:23 compute-0 sudo[89137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:23 compute-0 sudo[89137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:23 compute-0 sudo[89137]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:23 compute-0 sudo[89162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7
Nov 26 11:39:23 compute-0 sudo[89162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:23 compute-0 ceph-osd[89074]: bdev(0x55c969637800 /var/lib/ceph/osd/ceph-1/block) close
Nov 26 11:39:23 compute-0 ceph-mgr[75197]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1602342407; not ready for session (expect reconnect)
Nov 26 11:39:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 11:39:23 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 11:39:23 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 11:39:23 compute-0 ceph-mon[74928]: from='osd.0 [v2:192.168.122.100:6802/1602342407,v1:192.168.122.100:6803/1602342407]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 26 11:39:23 compute-0 ceph-mon[74928]: osdmap e8: 3 total, 0 up, 3 in
Nov 26 11:39:23 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 11:39:23 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 11:39:23 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:23 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 11:39:23 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:23 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:23 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 26 11:39:23 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:23 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 11:39:23 compute-0 podman[89224]: 2025-11-26 11:39:23.775911471 +0000 UTC m=+0.028664963 container create c46cbcbd631318566729496b9ed85228ea25c1ef7325dc09d6da9176f3cf619d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hopper, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:23 compute-0 systemd[1]: Started libpod-conmon-c46cbcbd631318566729496b9ed85228ea25c1ef7325dc09d6da9176f3cf619d.scope.
Nov 26 11:39:23 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:23 compute-0 podman[89224]: 2025-11-26 11:39:23.846326039 +0000 UTC m=+0.099079541 container init c46cbcbd631318566729496b9ed85228ea25c1ef7325dc09d6da9176f3cf619d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hopper, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:39:23 compute-0 podman[89224]: 2025-11-26 11:39:23.852520601 +0000 UTC m=+0.105274083 container start c46cbcbd631318566729496b9ed85228ea25c1ef7325dc09d6da9176f3cf619d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Nov 26 11:39:23 compute-0 podman[89224]: 2025-11-26 11:39:23.855614692 +0000 UTC m=+0.108368194 container attach c46cbcbd631318566729496b9ed85228ea25c1ef7325dc09d6da9176f3cf619d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 11:39:23 compute-0 kind_hopper[89237]: 167 167
Nov 26 11:39:23 compute-0 systemd[1]: libpod-c46cbcbd631318566729496b9ed85228ea25c1ef7325dc09d6da9176f3cf619d.scope: Deactivated successfully.
Nov 26 11:39:23 compute-0 conmon[89237]: conmon c46cbcbd631318566729 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c46cbcbd631318566729496b9ed85228ea25c1ef7325dc09d6da9176f3cf619d.scope/container/memory.events
Nov 26 11:39:23 compute-0 podman[89224]: 2025-11-26 11:39:23.85834181 +0000 UTC m=+0.111095292 container died c46cbcbd631318566729496b9ed85228ea25c1ef7325dc09d6da9176f3cf619d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 11:39:23 compute-0 podman[89224]: 2025-11-26 11:39:23.76462904 +0000 UTC m=+0.017382542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:23 compute-0 ceph-osd[89074]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 26 11:39:23 compute-0 ceph-osd[89074]: load: jerasure load: lrc 
Nov 26 11:39:23 compute-0 ceph-osd[89074]: bdev(0x55c96a4f0c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 11:39:23 compute-0 ceph-osd[89074]: bdev(0x55c96a4f0c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 11:39:23 compute-0 ceph-osd[89074]: bdev(0x55c96a4f0c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 11:39:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-49ffcf3b088e288f85d8c7ef9ff50151ac8c5ab02f861ad907fd550d6fe4bd0b-merged.mount: Deactivated successfully.
Nov 26 11:39:23 compute-0 ceph-osd[89074]: bdev(0x55c96a4f0c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 11:39:23 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 11:39:23 compute-0 ceph-osd[89074]: bdev(0x55c96a4f0c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 26 11:39:23 compute-0 podman[89224]: 2025-11-26 11:39:23.885219739 +0000 UTC m=+0.137973221 container remove c46cbcbd631318566729496b9ed85228ea25c1ef7325dc09d6da9176f3cf619d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hopper, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:23 compute-0 systemd[1]: libpod-conmon-c46cbcbd631318566729496b9ed85228ea25c1ef7325dc09d6da9176f3cf619d.scope: Deactivated successfully.
Nov 26 11:39:24 compute-0 podman[89272]: 2025-11-26 11:39:24.079900667 +0000 UTC m=+0.041303414 container create 2ad38f44f26398cae5a5c9028af1dd9a5d292678adafbc369eaadc3be1a32f86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2-activate-test, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bdev(0x55c96a4f0c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bdev(0x55c96a4f0c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bdev(0x55c96a4f0c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bdev(0x55c96a4f0c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bdev(0x55c96a4f0c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 26 11:39:24 compute-0 podman[89272]: 2025-11-26 11:39:24.063057164 +0000 UTC m=+0.024459931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 26 11:39:24 compute-0 ceph-osd[89074]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 26 11:39:24 compute-0 systemd[1]: Started libpod-conmon-2ad38f44f26398cae5a5c9028af1dd9a5d292678adafbc369eaadc3be1a32f86.scope.
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bdev(0x55c96a4f0c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bdev(0x55c96a4f0c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bdev(0x55c96a4f0c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bdev(0x55c96a4f0c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bdev(0x55c96a4f1400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bdev(0x55c96a4f1400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bdev(0x55c96a4f1400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bdev(0x55c96a4f1400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bluefs mount
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bluefs mount shared_bdev_used = 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 26 11:39:24 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5193552d67b8f20cf4e976978fd9acf8a1be1742ac5a7c3aa1b0b99a8688a3c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5193552d67b8f20cf4e976978fd9acf8a1be1742ac5a7c3aa1b0b99a8688a3c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5193552d67b8f20cf4e976978fd9acf8a1be1742ac5a7c3aa1b0b99a8688a3c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5193552d67b8f20cf4e976978fd9acf8a1be1742ac5a7c3aa1b0b99a8688a3c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5193552d67b8f20cf4e976978fd9acf8a1be1742ac5a7c3aa1b0b99a8688a3c9/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: RocksDB version: 7.9.2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Git sha 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: DB SUMMARY
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: DB Session ID:  WGMSKDMSX7S133X3M3JW
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: CURRENT file:  CURRENT
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                         Options.error_if_exists: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.create_if_missing: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                                     Options.env: 0x55c96a4c1c70
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                                Options.info_log: 0x55c9696be8a0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                              Options.statistics: (nil)
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.use_fsync: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 11:39:24 compute-0 podman[89272]: 2025-11-26 11:39:24.190619379 +0000 UTC m=+0.152022125 container init 2ad38f44f26398cae5a5c9028af1dd9a5d292678adafbc369eaadc3be1a32f86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                              Options.db_log_dir: 
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                                 Options.wal_dir: db.wal
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.write_buffer_manager: 0x55c96a5dc460
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.unordered_write: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.row_cache: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                              Options.wal_filter: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.two_write_queues: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.wal_compression: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.atomic_flush: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 11:39:24 compute-0 podman[89272]: 2025-11-26 11:39:24.197990638 +0000 UTC m=+0.159393385 container start 2ad38f44f26398cae5a5c9028af1dd9a5d292678adafbc369eaadc3be1a32f86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2-activate-test, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:24 compute-0 podman[89272]: 2025-11-26 11:39:24.201022319 +0000 UTC m=+0.162425066 container attach 2ad38f44f26398cae5a5c9028af1dd9a5d292678adafbc369eaadc3be1a32f86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.max_background_jobs: 4
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.max_background_compactions: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.max_subcompactions: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.max_open_files: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Compression algorithms supported:
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         kZSTD supported: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         kXpressCompression supported: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         kBZip2Compression supported: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         kLZ4Compression supported: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         kZlibCompression supported: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         kLZ4HCCompression supported: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         kSnappyCompression supported: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c9696be2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c9696ab1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c9696be2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c9696ab1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c9696be2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c9696ab1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c9696be2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c9696ab1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c9696be2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c9696ab1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c9696be2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c9696ab1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c9696be2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c9696ab1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c9696be240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c9696ab090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c9696be240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c9696ab090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
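With level_compaction_dynamic_level_bytes off, the level-compaction geometry implied by the options above follows the usual RocksDB rule: each level's target is max_bytes_for_level_base times max_bytes_for_level_multiplier raised to (level - 1). A minimal Python sketch using the values printed in this log (the loop itself is illustrative, not Ceph code):

    # Per-level size targets implied by the logged options:
    # max_bytes_for_level_base = 1073741824 (1 GiB),
    # max_bytes_for_level_multiplier = 8, num_levels = 7.
    base = 1073741824
    multiplier = 8.0
    for level in range(1, 7):            # L1..L6
        target = base * multiplier ** (level - 1)
        print(f"L{level}: {target / 2**30:.0f} GiB")
    # -> L1 1, L2 8, L3 64, L4 512, L5 4096, L6 32768 GiB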
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c9696be240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c9696ab090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
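Two sizes worth reading out of the [O-2] options above: the BinnedLRUCache capacity with its shard count, and the amount of memtable data merged per flush (RocksDB waits for min_write_buffer_number_to_merge full memtables before flushing). Illustrative arithmetic only, values copied from this log:

    capacity = 536870912          # block_cache capacity (bytes)
    num_shard_bits = 4            # 2**4 = 16 LRU shards
    print(capacity / 2**20, "MiB total,",
          capacity / (1 << num_shard_bits) / 2**20, "MiB per shard")
    # write_buffer_size 16 MiB * min_write_buffer_number_to_merge 6:
    print(6 * 16777216 / 2**20, "MiB merged per flush")
    # -> 512.0 MiB total, 32.0 MiB per shard; 96.0 MiB per flush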
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5ab424a0-fe12-434c-a3a0-a5f7344e73d6
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157164225732, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157164225976, "job": 1, "event": "recovery_finished"}
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
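The _open_db line above echoes the RocksDB option string BlueStore was configured with, as comma-separated key=value pairs. A small sketch that splits that exact string into a dict (plain string handling, not Ceph code):

    opts = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")
    parsed = dict(kv.split("=", 1) for kv in opts.split(","))
    print(parsed["write_buffer_size"], parsed["compaction_readahead_size"])
    # -> 16777216 2MB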
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: freelist init
Nov 26 11:39:24 compute-0 ceph-osd[89074]: freelist _read_cfg
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
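Converting the hex figures in the _init_alloc line above (illustrative arithmetic only): capacity 0x4ffc00000 is about 20 GiB, and the gap between capacity and free is 0x3000 bytes, i.e. three 4 KiB allocation units already in use.

    capacity = 0x4ffc00000        # 21470642176 bytes
    free     = 0x4ffbfd000
    block    = 0x1000             # 4 KiB min_alloc_size
    print(capacity / 2**30)               # -> ~20.0 GiB
    print((capacity - free) // block)     # -> 3 blocks (12 KiB) allocated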
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bluefs umount
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bdev(0x55c96a4f1400 /var/lib/ceph/osd/ceph-1/block) close
Nov 26 11:39:24 compute-0 ceph-osd[88091]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 70.456 iops: 18036.830 elapsed_sec: 0.166
Nov 26 11:39:24 compute-0 ceph-osd[88091]: log_channel(cluster) log [WRN] : OSD bench result of 18036.829510 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
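The warning above says the in-built bench result (about 18037 IOPS) was discarded and recommends measuring the device externally (e.g. with fio) and overriding osd_mclock_max_capacity_iops_[hdd|ssd]. A hypothetical follow-up using the ceph CLI from Python; the hdd device class and the value 450 are placeholders, not taken from this log:

    import subprocess
    # Pin the mClock IOPS capacity for osd.0 to an externally measured value.
    subprocess.run(
        ["ceph", "config", "set", "osd.0",
         "osd_mclock_max_capacity_iops_hdd", "450"],
        check=True,
    )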
Nov 26 11:39:24 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0[88087]: 2025-11-26T11:39:24.318+0000 7efdcf595640 -1 osd.0 0 waiting for initial osdmap
Nov 26 11:39:24 compute-0 ceph-osd[88091]: osd.0 0 waiting for initial osdmap
Nov 26 11:39:24 compute-0 ceph-osd[88091]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 26 11:39:24 compute-0 ceph-osd[88091]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 26 11:39:24 compute-0 ceph-osd[88091]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 26 11:39:24 compute-0 ceph-osd[88091]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Nov 26 11:39:24 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-0[88087]: 2025-11-26T11:39:24.333+0000 7efdcabbd640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 26 11:39:24 compute-0 ceph-osd[88091]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 26 11:39:24 compute-0 ceph-osd[88091]: osd.0 8 set_numa_affinity not setting numa affinity
Nov 26 11:39:24 compute-0 ceph-osd[88091]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bdev(0x55c96a4f1400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bdev(0x55c96a4f1400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bdev(0x55c96a4f1400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bdev(0x55c96a4f1400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bluefs mount
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bluefs mount shared_bdev_used = 4718592
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
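The db_paths byte count above appears to be about 95% of the block-device size reported for /var/lib/ceph/osd/ceph-1/block a few lines earlier; a quick check (illustrative arithmetic only, both numbers copied from this log):

    bdev_size = 21470642176       # from the bdev open line
    db_paths  = 20397110067       # from _prepare_db_environment
    print(db_paths / bdev_size)   # -> ~0.95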
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: RocksDB version: 7.9.2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Git sha 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: DB SUMMARY
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: DB Session ID:  WGMSKDMSX7S133X3M3JX
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: CURRENT file:  CURRENT
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                         Options.error_if_exists: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.create_if_missing: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                                     Options.env: 0x55c96a6843f0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                                Options.info_log: 0x55c96a4bd500
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                              Options.statistics: (nil)
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.use_fsync: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                              Options.db_log_dir: 
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                                 Options.wal_dir: db.wal
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.write_buffer_manager: 0x55c96a5dc6e0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.unordered_write: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.row_cache: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                              Options.wal_filter: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.two_write_queues: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.wal_compression: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.atomic_flush: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.max_background_jobs: 4
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.max_background_compactions: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.max_subcompactions: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.max_open_files: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Compression algorithms supported:
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         kZSTD supported: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         kXpressCompression supported: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         kBZip2Compression supported: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         kLZ4Compression supported: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         kZlibCompression supported: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         kLZ4HCCompression supported: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         kSnappyCompression supported: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c969691160)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c9696ab1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c969691160)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c9696ab1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c969691160)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c9696ab1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c969691160)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c9696ab1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c969691160)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c9696ab1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c969691160)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c9696ab1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c969691160)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c9696ab1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c969691060)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c9696ab090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c969691060)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c9696ab090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c969691060)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c9696ab090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
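Each BlueStore column family (default, m-*, p-*, O-*, L, P) gets its own options dump like the ones above, and the block_cache pointer in the table_factory section shows which families share an LRU cache: here the O-* families all print 0x55c9696ab090 with a 512 MiB capacity, while p-2 prints 0x55c9696ab1f0 at roughly 461 MiB. A minimal sketch for grouping families by that pointer, assuming the journal has been exported to a plain file at a hypothetical path:

    #!/usr/bin/env python3
    # Minimal sketch: group RocksDB column families from an OSD log by the
    # block_cache pointer printed in their options dump, to see which families
    # share a cache. The log path below is an assumption, not taken from this log.
    import re
    from collections import defaultdict

    LOG = "/var/log/messages"  # hypothetical export of these journal lines

    cf_re = re.compile(r"Options for column family \[([^\]]+)\]")
    cache_re = re.compile(r"block_cache: (0x[0-9a-f]+)")

    caches = defaultdict(list)
    current_cf = None
    with open(LOG) as fh:
        for line in fh:
            m = cf_re.search(line)
            if m:
                current_cf = m.group(1)
                continue
            m = cache_re.search(line)
            if m and current_cf:
                caches[m.group(1)].append(current_cf)
                current_cf = None

    for ptr, families in caches.items():
        print(ptr, families)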
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5ab424a0-fe12-434c-a3a0-a5f7344e73d6
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157164467551, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157164469885, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764157164, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5ab424a0-fe12-434c-a3a0-a5f7344e73d6", "db_session_id": "WGMSKDMSX7S133X3M3JX", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157164471296, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764157164, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5ab424a0-fe12-434c-a3a0-a5f7344e73d6", "db_session_id": "WGMSKDMSX7S133X3M3JX", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157164472570, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764157164, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5ab424a0-fe12-434c-a3a0-a5f7344e73d6", "db_session_id": "WGMSKDMSX7S133X3M3JX", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157164473221, "job": 1, "event": "recovery_finished"}
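[Annotation, not part of the captured journal] The rocksdb EVENT_LOG_v1 lines above carry a JSON payload after a fixed marker, so the SST files written during recovery can be summarized with a small script. A minimal sketch, assuming the journal text for this OSD unit is piped in on stdin (for example via journalctl); the script name and invocation are illustrative only:

    #!/usr/bin/env python3
    # Sketch: extract the JSON payload from rocksdb EVENT_LOG_v1 journal lines
    # (like the table_file_creation events above) and summarize each SST file.
    # Usage (assumed): journalctl -u <osd unit> | python3 parse_rocksdb_events.py
    import json
    import sys

    MARKER = "rocksdb: EVENT_LOG_v1 "

    for line in sys.stdin:
        idx = line.find(MARKER)
        if idx == -1:
            continue
        try:
            event = json.loads(line[idx + len(MARKER):])
        except json.JSONDecodeError:
            continue  # truncated or wrapped line; skip it
        if event.get("event") == "table_file_creation":
            props = event.get("table_properties", {})
            print("cf=%-8s file#%-4s %6s B  entries=%s  compression=%s" % (
                event.get("cf_name", "?"), event.get("file_number"),
                event.get("file_size"), props.get("num_entries"),
                props.get("compression")))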
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c969819c00
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: DB pointer 0x55c96a5c5a00
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 26 11:39:24 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
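[Annotation, not part of the captured journal] The _open_db line above prints the RocksDB option string BlueStore used for this OSD (normally derived from Ceph's bluestore_rocksdb_options setting). A minimal sketch that splits that exact string into key/value pairs so individual tunables are easier to compare against defaults:

    # Sketch: parse the option string printed by _open_db (copied verbatim above).
    opts = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

    parsed = dict(kv.split("=", 1) for kv in opts.split(","))
    for key, value in sorted(parsed.items()):
        print(f"{key:40} {value}")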
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 11:39:24 compute-0 ceph-osd[89074]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
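[Annotation, not part of the captured journal] The "DUMPING STATS" block above repeats a per-column-family section ("** Compaction Stats [<cf>] **") for default, m-0..m-2, p-0..p-2, O-0..O-2, L and P. A minimal sketch for grouping a captured dump by column family, assuming the block has been saved to a local file named rocksdb_stats.txt (the filename is an assumption):

    # Sketch: split a RocksDB "DUMPING STATS" block into per-column-family sections
    # using the "** Compaction Stats [<cf>] **" headers that appear above.
    import re
    from collections import defaultdict

    header = re.compile(r"\*\* Compaction Stats \[(?P<cf>[^\]]+)\] \*\*")

    sections = defaultdict(list)
    current = None
    with open("rocksdb_stats.txt") as fh:  # assumed capture of the dump text
        for line in fh:
            m = header.search(line)
            if m:
                current = m.group("cf")
            if current is not None:
                sections[current].append(line.rstrip("\n"))

    for cf, lines in sections.items():
        print(f"{cf}: {len(lines)} stats lines")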
Nov 26 11:39:24 compute-0 ceph-osd[89074]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 26 11:39:24 compute-0 ceph-osd[89074]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 26 11:39:24 compute-0 ceph-osd[89074]: _get_class not permitted to load lua
Nov 26 11:39:24 compute-0 ceph-osd[89074]: _get_class not permitted to load sdk
Nov 26 11:39:24 compute-0 ceph-osd[89074]: _get_class not permitted to load test_remote_reads
Nov 26 11:39:24 compute-0 ceph-osd[89074]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 26 11:39:24 compute-0 ceph-osd[89074]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 26 11:39:24 compute-0 ceph-osd[89074]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 26 11:39:24 compute-0 ceph-osd[89074]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 26 11:39:24 compute-0 ceph-osd[89074]: osd.1 0 load_pgs
Nov 26 11:39:24 compute-0 ceph-osd[89074]: osd.1 0 load_pgs opened 0 pgs
Nov 26 11:39:24 compute-0 ceph-osd[89074]: osd.1 0 log_to_monitors true
Nov 26 11:39:24 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1[89070]: 2025-11-26T11:39:24.487+0000 7f40c14d8740 -1 osd.1 0 log_to_monitors true
Nov 26 11:39:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Nov 26 11:39:24 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/734671324,v1:192.168.122.100:6807/734671324]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
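[Annotation, not part of the captured journal] The set-device-class request the monitor dispatches above is an ordinary mon_command with a JSON payload. A minimal sketch sending the same payload through the librados Python bindings; the conffile path and client.admin credentials are assumptions for illustration, not taken from this deployment:

    # Sketch: send the same mon_command payload seen in the audit log above.
    import json
    import rados

    cmd = {"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.admin")  # assumed credentials
    cluster.connect()
    try:
        ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
        print("ret:", ret, "out:", outbuf.decode() or outs)
    finally:
        cluster.shutdown()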
Nov 26 11:39:24 compute-0 ceph-mgr[75197]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1602342407; not ready for session (expect reconnect)
Nov 26 11:39:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 11:39:24 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 11:39:24 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 11:39:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 26 11:39:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 11:39:24 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2-activate-test[89298]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 26 11:39:24 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2-activate-test[89298]:                             [--no-systemd] [--no-tmpfs]
Nov 26 11:39:24 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2-activate-test[89298]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 26 11:39:24 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/734671324,v1:192.168.122.100:6807/734671324]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 26 11:39:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Nov 26 11:39:24 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1602342407,v1:192.168.122.100:6803/1602342407] boot
Nov 26 11:39:24 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Nov 26 11:39:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 11:39:24 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 11:39:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 26 11:39:24 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/734671324,v1:192.168.122.100:6807/734671324]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 26 11:39:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
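[Annotation, not part of the captured journal] By convention the initial CRUSH weight is the device capacity expressed in TiB, which is consistent with the initial_weight 0.0195 above for a roughly 20 GiB OSD (20 GiB / 1024 GiB per TiB is about 0.0195); the exact device size here is an inference, not stated in the log. A one-liner showing the arithmetic:

    # Sketch: CRUSH weight as capacity in TiB (device size assumed ~20 GiB).
    def crush_weight_tib(size_bytes: int) -> float:
        return round(size_bytes / 2**40, 4)

    print(crush_weight_tib(20 * 2**30))  # -> 0.0195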
Nov 26 11:39:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 11:39:24 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 11:39:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 11:39:24 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:24 compute-0 ceph-osd[88091]: osd.0 9 state: booting -> active
Nov 26 11:39:24 compute-0 ceph-mon[74928]: purged_snaps scrub starts
Nov 26 11:39:24 compute-0 ceph-mon[74928]: purged_snaps scrub ok
Nov 26 11:39:24 compute-0 ceph-mon[74928]: Deploying daemon osd.2 on compute-0
Nov 26 11:39:24 compute-0 ceph-mon[74928]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:24 compute-0 ceph-mon[74928]: from='osd.1 [v2:192.168.122.100:6806/734671324,v1:192.168.122.100:6807/734671324]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 26 11:39:24 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 11:39:24 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 11:39:24 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 11:39:24 compute-0 systemd[1]: libpod-2ad38f44f26398cae5a5c9028af1dd9a5d292678adafbc369eaadc3be1a32f86.scope: Deactivated successfully.
Nov 26 11:39:24 compute-0 podman[89272]: 2025-11-26 11:39:24.757398274 +0000 UTC m=+0.718801022 container died 2ad38f44f26398cae5a5c9028af1dd9a5d292678adafbc369eaadc3be1a32f86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-5193552d67b8f20cf4e976978fd9acf8a1be1742ac5a7c3aa1b0b99a8688a3c9-merged.mount: Deactivated successfully.
Nov 26 11:39:24 compute-0 podman[89272]: 2025-11-26 11:39:24.787808891 +0000 UTC m=+0.749211638 container remove 2ad38f44f26398cae5a5c9028af1dd9a5d292678adafbc369eaadc3be1a32f86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2-activate-test, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 11:39:24 compute-0 systemd[1]: libpod-conmon-2ad38f44f26398cae5a5c9028af1dd9a5d292678adafbc369eaadc3be1a32f86.scope: Deactivated successfully.
Nov 26 11:39:24 compute-0 systemd[1]: Reloading.
Nov 26 11:39:24 compute-0 systemd-rc-local-generator[89755]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:39:25 compute-0 systemd-sysv-generator[89758]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:39:25 compute-0 systemd[1]: Reloading.
Nov 26 11:39:25 compute-0 systemd-rc-local-generator[89796]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:39:25 compute-0 systemd-sysv-generator[89799]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:39:25 compute-0 systemd[1]: Starting Ceph osd.2 for ebab460c-3fd7-5f66-aa87-e10c143123f7...
Nov 26 11:39:25 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:25 compute-0 ceph-mgr[75197]: [devicehealth INFO root] creating mgr pool
Nov 26 11:39:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Nov 26 11:39:25 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 26 11:39:25 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 26 11:39:25 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 26 11:39:25 compute-0 podman[89855]: 2025-11-26 11:39:25.512077907 +0000 UTC m=+0.025616260 container create 18c676dedf0ee55e1d93e93ab298762fca7f5aee73b0831a904be1c87c222f1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:39:25 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e0b1c72478a4ecd9b718bac0cbff4f7821a45ef6ef0753c2f3487596d8fe92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e0b1c72478a4ecd9b718bac0cbff4f7821a45ef6ef0753c2f3487596d8fe92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e0b1c72478a4ecd9b718bac0cbff4f7821a45ef6ef0753c2f3487596d8fe92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e0b1c72478a4ecd9b718bac0cbff4f7821a45ef6ef0753c2f3487596d8fe92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e0b1c72478a4ecd9b718bac0cbff4f7821a45ef6ef0753c2f3487596d8fe92/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:25 compute-0 podman[89855]: 2025-11-26 11:39:25.57131259 +0000 UTC m=+0.084850944 container init 18c676dedf0ee55e1d93e93ab298762fca7f5aee73b0831a904be1c87c222f1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 11:39:25 compute-0 podman[89855]: 2025-11-26 11:39:25.57966266 +0000 UTC m=+0.093201013 container start 18c676dedf0ee55e1d93e93ab298762fca7f5aee73b0831a904be1c87c222f1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 11:39:25 compute-0 podman[89855]: 2025-11-26 11:39:25.580878863 +0000 UTC m=+0.094417217 container attach 18c676dedf0ee55e1d93e93ab298762fca7f5aee73b0831a904be1c87c222f1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 11:39:25 compute-0 podman[89855]: 2025-11-26 11:39:25.501747245 +0000 UTC m=+0.015285617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 26 11:39:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 11:39:25 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/734671324,v1:192.168.122.100:6807/734671324]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 26 11:39:25 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 26 11:39:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Nov 26 11:39:25 compute-0 ceph-osd[89074]: osd.1 0 done with init, starting boot process
Nov 26 11:39:25 compute-0 ceph-osd[89074]: osd.1 0 start_boot
Nov 26 11:39:25 compute-0 ceph-osd[89074]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 26 11:39:25 compute-0 ceph-osd[89074]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 26 11:39:25 compute-0 ceph-osd[89074]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 26 11:39:25 compute-0 ceph-osd[89074]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 26 11:39:25 compute-0 ceph-osd[89074]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 26 11:39:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Nov 26 11:39:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Nov 26 11:39:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Nov 26 11:39:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Nov 26 11:39:25 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Nov 26 11:39:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 11:39:25 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 11:39:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 11:39:25 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:25 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 11:39:25 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 11:39:25 compute-0 ceph-osd[88091]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 26 11:39:25 compute-0 ceph-osd[88091]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 26 11:39:25 compute-0 ceph-osd[88091]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 26 11:39:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Nov 26 11:39:25 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 26 11:39:25 compute-0 ceph-mgr[75197]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/734671324; not ready for session (expect reconnect)
Nov 26 11:39:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 11:39:25 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 11:39:25 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 11:39:25 compute-0 ceph-mon[74928]: OSD bench result of 18036.829510 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 26 11:39:25 compute-0 ceph-mon[74928]: from='osd.1 [v2:192.168.122.100:6806/734671324,v1:192.168.122.100:6807/734671324]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 26 11:39:25 compute-0 ceph-mon[74928]: osd.0 [v2:192.168.122.100:6802/1602342407,v1:192.168.122.100:6803/1602342407] boot
Nov 26 11:39:25 compute-0 ceph-mon[74928]: osdmap e9: 3 total, 1 up, 3 in
Nov 26 11:39:25 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 11:39:25 compute-0 ceph-mon[74928]: from='osd.1 [v2:192.168.122.100:6806/734671324,v1:192.168.122.100:6807/734671324]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 26 11:39:25 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 11:39:25 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:25 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 26 11:39:25 compute-0 ceph-mon[74928]: from='osd.1 [v2:192.168.122.100:6806/734671324,v1:192.168.122.100:6807/734671324]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 26 11:39:25 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 26 11:39:25 compute-0 ceph-mon[74928]: osdmap e10: 3 total, 1 up, 3 in
Nov 26 11:39:25 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 11:39:25 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:26 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2-activate[89867]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 26 11:39:26 compute-0 bash[89855]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 26 11:39:26 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2-activate[89867]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 26 11:39:26 compute-0 bash[89855]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 26 11:39:26 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2-activate[89867]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 26 11:39:26 compute-0 bash[89855]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 26 11:39:26 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2-activate[89867]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 26 11:39:26 compute-0 bash[89855]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 26 11:39:26 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2-activate[89867]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 26 11:39:26 compute-0 bash[89855]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 26 11:39:26 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2-activate[89867]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 26 11:39:26 compute-0 bash[89855]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 26 11:39:26 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2-activate[89867]: --> ceph-volume raw activate successful for osd ID: 2
Nov 26 11:39:26 compute-0 bash[89855]: --> ceph-volume raw activate successful for osd ID: 2
Nov 26 11:39:26 compute-0 systemd[1]: libpod-18c676dedf0ee55e1d93e93ab298762fca7f5aee73b0831a904be1c87c222f1c.scope: Deactivated successfully.
Nov 26 11:39:26 compute-0 podman[89984]: 2025-11-26 11:39:26.431616885 +0000 UTC m=+0.016504416 container died 18c676dedf0ee55e1d93e93ab298762fca7f5aee73b0831a904be1c87c222f1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2-activate, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 11:39:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0e0b1c72478a4ecd9b718bac0cbff4f7821a45ef6ef0753c2f3487596d8fe92-merged.mount: Deactivated successfully.
Nov 26 11:39:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:39:26 compute-0 podman[89984]: 2025-11-26 11:39:26.550647464 +0000 UTC m=+0.135534995 container remove 18c676dedf0ee55e1d93e93ab298762fca7f5aee73b0831a904be1c87c222f1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 26 11:39:26 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 26 11:39:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Nov 26 11:39:26 compute-0 podman[90031]: 2025-11-26 11:39:26.748140563 +0000 UTC m=+0.075130887 container create 6271fc17f1907f24996ce7d9a41a6b6b4278f9bcd1a1e185c01be78510d4ee54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:26 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Nov 26 11:39:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 11:39:26 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 11:39:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 11:39:26 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:26 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 11:39:26 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 11:39:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 11:39:26 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 11:39:26 compute-0 ceph-mgr[75197]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/734671324; not ready for session (expect reconnect)
Nov 26 11:39:26 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 11:39:26 compute-0 ceph-mon[74928]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 11:39:26 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 26 11:39:26 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 11:39:26 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 26 11:39:26 compute-0 ceph-mon[74928]: osdmap e11: 3 total, 1 up, 3 in
Nov 26 11:39:26 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 11:39:26 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:26 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 11:39:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17fd7a046edbe075ea25035d4ed961a1f584978cf71f011663d4250c602190ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17fd7a046edbe075ea25035d4ed961a1f584978cf71f011663d4250c602190ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17fd7a046edbe075ea25035d4ed961a1f584978cf71f011663d4250c602190ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17fd7a046edbe075ea25035d4ed961a1f584978cf71f011663d4250c602190ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17fd7a046edbe075ea25035d4ed961a1f584978cf71f011663d4250c602190ba/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:26 compute-0 podman[90031]: 2025-11-26 11:39:26.692751426 +0000 UTC m=+0.019741760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:26 compute-0 podman[90031]: 2025-11-26 11:39:26.854313302 +0000 UTC m=+0.181303636 container init 6271fc17f1907f24996ce7d9a41a6b6b4278f9bcd1a1e185c01be78510d4ee54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:26 compute-0 podman[90031]: 2025-11-26 11:39:26.862007979 +0000 UTC m=+0.188998303 container start 6271fc17f1907f24996ce7d9a41a6b6b4278f9bcd1a1e185c01be78510d4ee54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:39:26 compute-0 bash[90031]: 6271fc17f1907f24996ce7d9a41a6b6b4278f9bcd1a1e185c01be78510d4ee54
Nov 26 11:39:26 compute-0 systemd[1]: Started Ceph osd.2 for ebab460c-3fd7-5f66-aa87-e10c143123f7.
Nov 26 11:39:26 compute-0 sudo[89162]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:26 compute-0 ceph-osd[90047]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 11:39:26 compute-0 ceph-osd[90047]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 26 11:39:26 compute-0 ceph-osd[90047]: pidfile_write: ignore empty --pid-file
Nov 26 11:39:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:39:26 compute-0 ceph-osd[90047]: bdev(0x55fea47dd800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 11:39:26 compute-0 ceph-osd[90047]: bdev(0x55fea47dd800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 11:39:26 compute-0 ceph-osd[90047]: bdev(0x55fea47dd800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 11:39:26 compute-0 ceph-osd[90047]: bdev(0x55fea47dd800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 11:39:26 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 11:39:26 compute-0 ceph-osd[90047]: bdev(0x55fea5615000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 11:39:26 compute-0 ceph-osd[90047]: bdev(0x55fea5615000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 11:39:26 compute-0 ceph-osd[90047]: bdev(0x55fea5615000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 11:39:26 compute-0 ceph-osd[90047]: bdev(0x55fea5615000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 11:39:26 compute-0 ceph-osd[90047]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 26 11:39:26 compute-0 ceph-osd[90047]: bdev(0x55fea5615000 /var/lib/ceph/osd/ceph-2/block) close
Nov 26 11:39:26 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:39:26 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:27 compute-0 sudo[90060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:27 compute-0 sudo[90060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:27 compute-0 sudo[90060]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:27 compute-0 sudo[90085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:27 compute-0 sudo[90085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:27 compute-0 sudo[90085]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:27 compute-0 sudo[90110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:27 compute-0 sudo[90110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:27 compute-0 sudo[90110]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:27 compute-0 sudo[90135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- raw list --format json
Nov 26 11:39:27 compute-0 sudo[90135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bdev(0x55fea47dd800 /var/lib/ceph/osd/ceph-2/block) close
Nov 26 11:39:27 compute-0 podman[90194]: 2025-11-26 11:39:27.374285823 +0000 UTC m=+0.027890895 container create 78253c560864ec43504604d85eb02647cac0a61561946a4e6f71a337078fd17f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:27 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v25: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 26 11:39:27 compute-0 systemd[1]: Started libpod-conmon-78253c560864ec43504604d85eb02647cac0a61561946a4e6f71a337078fd17f.scope.
Nov 26 11:39:27 compute-0 ceph-osd[90047]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Nov 26 11:39:27 compute-0 ceph-osd[90047]: load: jerasure load: lrc 
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bdev(0x55fea5615c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bdev(0x55fea5615c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bdev(0x55fea5615c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bdev(0x55fea5615c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bdev(0x55fea5615c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 26 11:39:27 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:27 compute-0 podman[90194]: 2025-11-26 11:39:27.439842833 +0000 UTC m=+0.093447915 container init 78253c560864ec43504604d85eb02647cac0a61561946a4e6f71a337078fd17f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_driscoll, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 11:39:27 compute-0 podman[90194]: 2025-11-26 11:39:27.4445948 +0000 UTC m=+0.098199862 container start 78253c560864ec43504604d85eb02647cac0a61561946a4e6f71a337078fd17f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_driscoll, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 11:39:27 compute-0 podman[90194]: 2025-11-26 11:39:27.447357266 +0000 UTC m=+0.100962338 container attach 78253c560864ec43504604d85eb02647cac0a61561946a4e6f71a337078fd17f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_driscoll, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:27 compute-0 vigorous_driscoll[90207]: 167 167
Nov 26 11:39:27 compute-0 systemd[1]: libpod-78253c560864ec43504604d85eb02647cac0a61561946a4e6f71a337078fd17f.scope: Deactivated successfully.
Nov 26 11:39:27 compute-0 podman[90194]: 2025-11-26 11:39:27.450229452 +0000 UTC m=+0.103834514 container died 78253c560864ec43504604d85eb02647cac0a61561946a4e6f71a337078fd17f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 11:39:27 compute-0 podman[90194]: 2025-11-26 11:39:27.364303306 +0000 UTC m=+0.017908388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a39cdcd4d2ee1554e3beebc8f589dd7cc5c04c3fa5ebddcf872005608014d87-merged.mount: Deactivated successfully.
Nov 26 11:39:27 compute-0 podman[90194]: 2025-11-26 11:39:27.473070662 +0000 UTC m=+0.126675723 container remove 78253c560864ec43504604d85eb02647cac0a61561946a4e6f71a337078fd17f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_driscoll, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 11:39:27 compute-0 systemd[1]: libpod-conmon-78253c560864ec43504604d85eb02647cac0a61561946a4e6f71a337078fd17f.scope: Deactivated successfully.
Nov 26 11:39:27 compute-0 ceph-osd[89074]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 75.009 iops: 19202.301 elapsed_sec: 0.156
Nov 26 11:39:27 compute-0 ceph-osd[89074]: log_channel(cluster) log [WRN] : OSD bench result of 19202.301081 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 26 11:39:27 compute-0 ceph-osd[89074]: osd.1 0 waiting for initial osdmap
Nov 26 11:39:27 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1[89070]: 2025-11-26T11:39:27.516+0000 7f40bd458640 -1 osd.1 0 waiting for initial osdmap
Nov 26 11:39:27 compute-0 ceph-osd[89074]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 26 11:39:27 compute-0 ceph-osd[89074]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 26 11:39:27 compute-0 ceph-osd[89074]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 26 11:39:27 compute-0 ceph-osd[89074]: osd.1 11 check_osdmap_features require_osd_release unknown -> reef
Nov 26 11:39:27 compute-0 ceph-osd[89074]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 26 11:39:27 compute-0 ceph-osd[89074]: osd.1 11 set_numa_affinity not setting numa affinity
Nov 26 11:39:27 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-1[89070]: 2025-11-26T11:39:27.527+0000 7f40b8a80640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 26 11:39:27 compute-0 ceph-osd[89074]: osd.1 11 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Nov 26 11:39:27 compute-0 podman[90235]: 2025-11-26 11:39:27.586290931 +0000 UTC m=+0.026847442 container create 922bcdf85b6e50cd2a3f4e4b07b01fec5b219cdd6e134702214a36da5a037886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_morse, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:27 compute-0 systemd[1]: Started libpod-conmon-922bcdf85b6e50cd2a3f4e4b07b01fec5b219cdd6e134702214a36da5a037886.scope.
Nov 26 11:39:27 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417774d7bb5bcaa850a217ef61456a7972b3210b0928f6b6a4797872a7a929e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417774d7bb5bcaa850a217ef61456a7972b3210b0928f6b6a4797872a7a929e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417774d7bb5bcaa850a217ef61456a7972b3210b0928f6b6a4797872a7a929e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417774d7bb5bcaa850a217ef61456a7972b3210b0928f6b6a4797872a7a929e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:27 compute-0 podman[90235]: 2025-11-26 11:39:27.635867958 +0000 UTC m=+0.076424488 container init 922bcdf85b6e50cd2a3f4e4b07b01fec5b219cdd6e134702214a36da5a037886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 11:39:27 compute-0 podman[90235]: 2025-11-26 11:39:27.640356872 +0000 UTC m=+0.080913382 container start 922bcdf85b6e50cd2a3f4e4b07b01fec5b219cdd6e134702214a36da5a037886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_morse, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:27 compute-0 podman[90235]: 2025-11-26 11:39:27.643169434 +0000 UTC m=+0.083725945 container attach 922bcdf85b6e50cd2a3f4e4b07b01fec5b219cdd6e134702214a36da5a037886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 11:39:27 compute-0 podman[90235]: 2025-11-26 11:39:27.57552157 +0000 UTC m=+0.016078100 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bdev(0x55fea5615c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bdev(0x55fea5615c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bdev(0x55fea5615c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bdev(0x55fea5615c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bdev(0x55fea5615c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 26 11:39:27 compute-0 ceph-mgr[75197]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/734671324; not ready for session (expect reconnect)
Nov 26 11:39:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 11:39:27 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 11:39:27 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 11:39:27 compute-0 ceph-mon[74928]: purged_snaps scrub starts
Nov 26 11:39:27 compute-0 ceph-mon[74928]: purged_snaps scrub ok
Nov 26 11:39:27 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:27 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:27 compute-0 ceph-mon[74928]: pgmap v25: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 26 11:39:27 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 11:39:27 compute-0 ceph-osd[90047]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 26 11:39:27 compute-0 ceph-osd[90047]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 26 11:39:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bdev(0x55fea5615c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bdev(0x55fea5615c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bdev(0x55fea5615c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 11:39:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e12 e12: 3 total, 2 up, 3 in
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bdev(0x55fea5615c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bdev(0x55fea57f8400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bdev(0x55fea57f8400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bdev(0x55fea57f8400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bdev(0x55fea57f8400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bluefs mount
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 26 11:39:27 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/734671324,v1:192.168.122.100:6807/734671324] boot
Nov 26 11:39:27 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 2 up, 3 in
Nov 26 11:39:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 11:39:27 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 11:39:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 11:39:27 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bluefs mount shared_bdev_used = 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 26 11:39:27 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: RocksDB version: 7.9.2
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Git sha 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: DB SUMMARY
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: DB Session ID:  N9G13TKQFD5SLPP2OV3T
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: CURRENT file:  CURRENT
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                         Options.error_if_exists: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                       Options.create_if_missing: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                                     Options.env: 0x55fea5667c70
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                                Options.info_log: 0x55fea4864800
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                              Options.statistics: (nil)
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                               Options.use_fsync: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                              Options.db_log_dir: 
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                                 Options.wal_dir: db.wal
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                    Options.write_buffer_manager: 0x55fea5772460
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.unordered_write: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                               Options.row_cache: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                              Options.wal_filter: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.two_write_queues: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.wal_compression: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.atomic_flush: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.max_background_jobs: 4
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.max_background_compactions: -1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.max_subcompactions: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                          Options.max_open_files: -1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Compression algorithms supported:
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         kZSTD supported: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         kXpressCompression supported: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         kBZip2Compression supported: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         kLZ4Compression supported: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         kZlibCompression supported: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         kLZ4HCCompression supported: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         kSnappyCompression supported: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fea4864260)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fea48511f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fea4864260)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fea48511f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fea4864260)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fea48511f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:27 compute-0 ceph-osd[89074]: osd.1 12 state: booting -> active
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fea4864260)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fea48511f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:27 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fea4864260)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fea48511f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fea4864260)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fea48511f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fea4864260)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fea48511f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fea4864200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fea4851090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fea4864200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fea4851090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fea4864200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fea4851090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4547fb43-28b9-4bfb-8565-a1cb33b39748
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157167992250, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157167992432, "job": 1, "event": "recovery_finished"}
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Nov 26 11:39:27 compute-0 ceph-osd[90047]: freelist init
Nov 26 11:39:27 compute-0 ceph-osd[90047]: freelist _read_cfg
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 26 11:39:27 compute-0 ceph-osd[90047]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bluefs umount
Nov 26 11:39:27 compute-0 ceph-osd[90047]: bdev(0x55fea57f8400 /var/lib/ceph/osd/ceph-2/block) close
Nov 26 11:39:28 compute-0 ceph-osd[90047]: bdev(0x55fea57f8400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 11:39:28 compute-0 ceph-osd[90047]: bdev(0x55fea57f8400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 11:39:28 compute-0 ceph-osd[90047]: bdev(0x55fea57f8400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 11:39:28 compute-0 ceph-osd[90047]: bdev(0x55fea57f8400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 11:39:28 compute-0 ceph-osd[90047]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 26 11:39:28 compute-0 ceph-osd[90047]: bluefs mount
Nov 26 11:39:28 compute-0 ceph-osd[90047]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: bluefs mount shared_bdev_used = 4718592
Nov 26 11:39:28 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: RocksDB version: 7.9.2
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Git sha 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: DB SUMMARY
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: DB Session ID:  N9G13TKQFD5SLPP2OV3S
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: CURRENT file:  CURRENT
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                         Options.error_if_exists: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                       Options.create_if_missing: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                                     Options.env: 0x55fea5818460
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                                Options.info_log: 0x55fea485ae40
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                              Options.statistics: (nil)
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                               Options.use_fsync: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                              Options.db_log_dir: 
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                                 Options.wal_dir: db.wal
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                    Options.write_buffer_manager: 0x55fea57726e0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.unordered_write: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                               Options.row_cache: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                              Options.wal_filter: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.two_write_queues: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.wal_compression: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.atomic_flush: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.max_background_jobs: 4
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.max_background_compactions: -1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.max_subcompactions: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                          Options.max_open_files: -1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Compression algorithms supported:
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         kZSTD supported: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         kXpressCompression supported: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         kBZip2Compression supported: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         kLZ4Compression supported: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         kZlibCompression supported: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         kLZ4HCCompression supported: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         kSnappyCompression supported: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fea4837220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fea48511f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fea4837220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fea48511f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fea4837220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fea48511f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fea4837220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fea48511f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fea4837220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fea48511f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fea4837220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fea48511f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fea4837220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fea48511f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fea485bfa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fea4851090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 26 11:39:28 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1840019852,v1:192.168.122.100:6811/1840019852]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fea485bfa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fea4851090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:           Options.merge_operator: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fea485bfa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fea4851090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.compression: LZ4
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.num_levels: 7
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                           Options.bloom_locality: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                               Options.ttl: 2592000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                       Options.enable_blob_files: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                           Options.min_blob_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4547fb43-28b9-4bfb-8565-a1cb33b39748
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157168294839, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157168298202, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764157168, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4547fb43-28b9-4bfb-8565-a1cb33b39748", "db_session_id": "N9G13TKQFD5SLPP2OV3S", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157168300044, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764157168, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4547fb43-28b9-4bfb-8565-a1cb33b39748", "db_session_id": "N9G13TKQFD5SLPP2OV3S", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157168301990, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764157168, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4547fb43-28b9-4bfb-8565-a1cb33b39748", "db_session_id": "N9G13TKQFD5SLPP2OV3S", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157168302826, "job": 1, "event": "recovery_finished"}
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55fea5825c00
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: DB pointer 0x55fea575ba00
Nov 26 11:39:28 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 26 11:39:28 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Nov 26 11:39:28 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Nov 26 11:39:28 compute-0 ceph-osd[90047]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 26 11:39:28 compute-0 ceph-osd[90047]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 26 11:39:28 compute-0 ceph-osd[90047]: _get_class not permitted to load lua
Nov 26 11:39:28 compute-0 ceph-osd[90047]: _get_class not permitted to load sdk
Nov 26 11:39:28 compute-0 ceph-osd[90047]: _get_class not permitted to load test_remote_reads
Nov 26 11:39:28 compute-0 ceph-osd[90047]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 26 11:39:28 compute-0 ceph-osd[90047]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 26 11:39:28 compute-0 ceph-osd[90047]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 26 11:39:28 compute-0 ceph-osd[90047]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 26 11:39:28 compute-0 ceph-osd[90047]: osd.2 0 load_pgs
Nov 26 11:39:28 compute-0 ceph-osd[90047]: osd.2 0 load_pgs opened 0 pgs
Nov 26 11:39:28 compute-0 ceph-osd[90047]: osd.2 0 log_to_monitors true
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 11:39:28 compute-0 ceph-osd[90047]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 3 writes, 4 keys, 3 commit groups, 1.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 3 writes, 1 syncs, 3.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3 writes, 4 keys, 3 commit groups, 1.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 3 writes, 1 syncs, 3.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000388 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000388 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000388 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000388 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000388 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000388 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000388 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea4851090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea4851090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea4851090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000388 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000388 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 26 11:39:28 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2[90043]: 2025-11-26T11:39:28.319+0000 7f58d384c740 -1 osd.2 0 log_to_monitors true
Nov 26 11:39:28 compute-0 festive_morse[90248]: {
Nov 26 11:39:28 compute-0 festive_morse[90248]:     "2627095b-eef8-4027-bfef-68bf7cb6801f": {
Nov 26 11:39:28 compute-0 festive_morse[90248]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:39:28 compute-0 festive_morse[90248]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 11:39:28 compute-0 festive_morse[90248]:         "osd_id": 1,
Nov 26 11:39:28 compute-0 festive_morse[90248]:         "osd_uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:39:28 compute-0 festive_morse[90248]:         "type": "bluestore"
Nov 26 11:39:28 compute-0 festive_morse[90248]:     },
Nov 26 11:39:28 compute-0 festive_morse[90248]:     "a9ad59a0-aa2e-4d92-b571-519d2d145b6a": {
Nov 26 11:39:28 compute-0 festive_morse[90248]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:39:28 compute-0 festive_morse[90248]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 11:39:28 compute-0 festive_morse[90248]:         "osd_id": 0,
Nov 26 11:39:28 compute-0 festive_morse[90248]:         "osd_uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:39:28 compute-0 festive_morse[90248]:         "type": "bluestore"
Nov 26 11:39:28 compute-0 festive_morse[90248]:     },
Nov 26 11:39:28 compute-0 festive_morse[90248]:     "d56156fb-7361-4bef-b06b-1320109b4323": {
Nov 26 11:39:28 compute-0 festive_morse[90248]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:39:28 compute-0 festive_morse[90248]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 11:39:28 compute-0 festive_morse[90248]:         "osd_id": 2,
Nov 26 11:39:28 compute-0 festive_morse[90248]:         "osd_uuid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:39:28 compute-0 festive_morse[90248]:         "type": "bluestore"
Nov 26 11:39:28 compute-0 festive_morse[90248]:     }
Nov 26 11:39:28 compute-0 festive_morse[90248]: }
Nov 26 11:39:28 compute-0 systemd[1]: libpod-922bcdf85b6e50cd2a3f4e4b07b01fec5b219cdd6e134702214a36da5a037886.scope: Deactivated successfully.
Nov 26 11:39:28 compute-0 podman[90235]: 2025-11-26 11:39:28.40486102 +0000 UTC m=+0.845417530 container died 922bcdf85b6e50cd2a3f4e4b07b01fec5b219cdd6e134702214a36da5a037886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-417774d7bb5bcaa850a217ef61456a7972b3210b0928f6b6a4797872a7a929e8-merged.mount: Deactivated successfully.
Nov 26 11:39:28 compute-0 podman[90235]: 2025-11-26 11:39:28.434568093 +0000 UTC m=+0.875124603 container remove 922bcdf85b6e50cd2a3f4e4b07b01fec5b219cdd6e134702214a36da5a037886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_morse, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:28 compute-0 systemd[1]: libpod-conmon-922bcdf85b6e50cd2a3f4e4b07b01fec5b219cdd6e134702214a36da5a037886.scope: Deactivated successfully.
Nov 26 11:39:28 compute-0 sudo[90135]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:39:28 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:39:28 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:28 compute-0 sudo[90704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:28 compute-0 sudo[90704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:28 compute-0 sudo[90704]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:28 compute-0 sudo[90729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:39:28 compute-0 sudo[90729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:28 compute-0 sudo[90729]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:28 compute-0 sudo[90754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:28 compute-0 sudo[90754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:28 compute-0 sudo[90754]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:28 compute-0 sudo[90779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:28 compute-0 sudo[90779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:28 compute-0 sudo[90779]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:28 compute-0 sudo[90804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:28 compute-0 sudo[90804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:28 compute-0 sudo[90804]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:28 compute-0 sudo[90829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 26 11:39:28 compute-0 sudo[90829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:28 compute-0 ceph-mon[74928]: OSD bench result of 19202.301081 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 26 11:39:28 compute-0 ceph-mon[74928]: osd.1 [v2:192.168.122.100:6806/734671324,v1:192.168.122.100:6807/734671324] boot
Nov 26 11:39:28 compute-0 ceph-mon[74928]: osdmap e12: 3 total, 2 up, 3 in
Nov 26 11:39:28 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 11:39:28 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:28 compute-0 ceph-mon[74928]: from='osd.2 [v2:192.168.122.100:6810/1840019852,v1:192.168.122.100:6811/1840019852]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 26 11:39:28 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:28 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 26 11:39:28 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1840019852,v1:192.168.122.100:6811/1840019852]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 26 11:39:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Nov 26 11:39:28 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Nov 26 11:39:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 11:39:28 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:28 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 11:39:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 26 11:39:28 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1840019852,v1:192.168.122.100:6811/1840019852]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 26 11:39:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 26 11:39:28 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:28 compute-0 ceph-mgr[75197]: [devicehealth INFO root] creating main.db for devicehealth
Nov 26 11:39:29 compute-0 ceph-mgr[75197]: [devicehealth INFO root] Check health
Nov 26 11:39:29 compute-0 ceph-mgr[75197]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Nov 26 11:39:29 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 26 11:39:29 compute-0 podman[90911]: 2025-11-26 11:39:29.030626085 +0000 UTC m=+0.036790645 container exec 810eaed6cbde00330c0646cedaef5c2ae94579236d67ba42b8ef02d055d04ad5 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:29 compute-0 sudo[90935]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Nov 26 11:39:29 compute-0 sudo[90935]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 26 11:39:29 compute-0 sudo[90935]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Nov 26 11:39:29 compute-0 sudo[90935]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:29 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 26 11:39:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 26 11:39:29 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 26 11:39:29 compute-0 podman[90911]: 2025-11-26 11:39:29.113366306 +0000 UTC m=+0.119530886 container exec_died 810eaed6cbde00330c0646cedaef5c2ae94579236d67ba42b8ef02d055d04ad5 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:29 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 26 11:39:29 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 26 11:39:29 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v28: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 26 11:39:29 compute-0 sudo[90829]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:39:29 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:39:29 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:29 compute-0 sudo[91023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:29 compute-0 sudo[91023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:29 compute-0 sudo[91023]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:29 compute-0 sudo[91048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:29 compute-0 sudo[91048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:29 compute-0 sudo[91048]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:29 compute-0 sudo[91073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:29 compute-0 sudo[91073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:29 compute-0 sudo[91073]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:29 compute-0 sudo[91098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- inventory --format=json-pretty --filter-for-batch
Nov 26 11:39:29 compute-0 sudo[91098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:29 compute-0 podman[91153]: 2025-11-26 11:39:29.790546587 +0000 UTC m=+0.026214141 container create 8dabf530da834cdbcac36bd6119b06d15f7c6eb9290ce5bfb3c9ef9f77599432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:29 compute-0 systemd[1]: Started libpod-conmon-8dabf530da834cdbcac36bd6119b06d15f7c6eb9290ce5bfb3c9ef9f77599432.scope.
Nov 26 11:39:29 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:29 compute-0 podman[91153]: 2025-11-26 11:39:29.839756834 +0000 UTC m=+0.075424398 container init 8dabf530da834cdbcac36bd6119b06d15f7c6eb9290ce5bfb3c9ef9f77599432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 11:39:29 compute-0 podman[91153]: 2025-11-26 11:39:29.844286095 +0000 UTC m=+0.079953649 container start 8dabf530da834cdbcac36bd6119b06d15f7c6eb9290ce5bfb3c9ef9f77599432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Nov 26 11:39:29 compute-0 podman[91153]: 2025-11-26 11:39:29.845614873 +0000 UTC m=+0.081282428 container attach 8dabf530da834cdbcac36bd6119b06d15f7c6eb9290ce5bfb3c9ef9f77599432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cerf, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:29 compute-0 beautiful_cerf[91165]: 167 167
Nov 26 11:39:29 compute-0 podman[91153]: 2025-11-26 11:39:29.848134155 +0000 UTC m=+0.083801709 container died 8dabf530da834cdbcac36bd6119b06d15f7c6eb9290ce5bfb3c9ef9f77599432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cerf, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:29 compute-0 systemd[1]: libpod-8dabf530da834cdbcac36bd6119b06d15f7c6eb9290ce5bfb3c9ef9f77599432.scope: Deactivated successfully.
Nov 26 11:39:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb3b6d0b8facdf39e5b441f65219405bb1d8287a9ad4c10eb031bacd94d22ca0-merged.mount: Deactivated successfully.
Nov 26 11:39:29 compute-0 podman[91153]: 2025-11-26 11:39:29.865079763 +0000 UTC m=+0.100747316 container remove 8dabf530da834cdbcac36bd6119b06d15f7c6eb9290ce5bfb3c9ef9f77599432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cerf, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:29 compute-0 podman[91153]: 2025-11-26 11:39:29.779769132 +0000 UTC m=+0.015436706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:29 compute-0 systemd[1]: libpod-conmon-8dabf530da834cdbcac36bd6119b06d15f7c6eb9290ce5bfb3c9ef9f77599432.scope: Deactivated successfully.
Nov 26 11:39:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 26 11:39:29 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1840019852,v1:192.168.122.100:6811/1840019852]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 26 11:39:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Nov 26 11:39:29 compute-0 ceph-osd[90047]: osd.2 0 done with init, starting boot process
Nov 26 11:39:29 compute-0 ceph-osd[90047]: osd.2 0 start_boot
Nov 26 11:39:29 compute-0 ceph-osd[90047]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 26 11:39:29 compute-0 ceph-osd[90047]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 26 11:39:29 compute-0 ceph-osd[90047]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 26 11:39:29 compute-0 ceph-osd[90047]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 26 11:39:29 compute-0 ceph-osd[90047]: osd.2 0  bench count 12288000 bsize 4 KiB
Nov 26 11:39:29 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Nov 26 11:39:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 11:39:29 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:29 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 11:39:29 compute-0 ceph-mon[74928]: from='osd.2 [v2:192.168.122.100:6810/1840019852,v1:192.168.122.100:6811/1840019852]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 26 11:39:29 compute-0 ceph-mon[74928]: osdmap e13: 3 total, 2 up, 3 in
Nov 26 11:39:29 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:29 compute-0 ceph-mon[74928]: from='osd.2 [v2:192.168.122.100:6810/1840019852,v1:192.168.122.100:6811/1840019852]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 26 11:39:29 compute-0 ceph-mon[74928]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 26 11:39:29 compute-0 ceph-mon[74928]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 26 11:39:29 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 26 11:39:29 compute-0 ceph-mon[74928]: pgmap v28: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 26 11:39:29 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:29 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:29 compute-0 ceph-mgr[75197]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1840019852; not ready for session (expect reconnect)
Nov 26 11:39:29 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.mwrktr(active, since 48s)
Nov 26 11:39:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 11:39:29 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:29 compute-0 podman[91187]: 2025-11-26 11:39:29.982764361 +0000 UTC m=+0.035515559 container create 2a2d95bb1d2e95bb6fdd1812c8f9aa4180695f82261776141bfa9b618b805b12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wilson, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 11:39:29 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 11:39:30 compute-0 systemd[1]: Started libpod-conmon-2a2d95bb1d2e95bb6fdd1812c8f9aa4180695f82261776141bfa9b618b805b12.scope.
Nov 26 11:39:30 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6939316484ddcd91fb2075d36f7d544135dfaee873ca0a246770f749c6fb36f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6939316484ddcd91fb2075d36f7d544135dfaee873ca0a246770f749c6fb36f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6939316484ddcd91fb2075d36f7d544135dfaee873ca0a246770f749c6fb36f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6939316484ddcd91fb2075d36f7d544135dfaee873ca0a246770f749c6fb36f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:30 compute-0 podman[91187]: 2025-11-26 11:39:30.039135223 +0000 UTC m=+0.091886421 container init 2a2d95bb1d2e95bb6fdd1812c8f9aa4180695f82261776141bfa9b618b805b12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 11:39:30 compute-0 podman[91187]: 2025-11-26 11:39:30.0439708 +0000 UTC m=+0.096721998 container start 2a2d95bb1d2e95bb6fdd1812c8f9aa4180695f82261776141bfa9b618b805b12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wilson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 11:39:30 compute-0 podman[91187]: 2025-11-26 11:39:30.045155082 +0000 UTC m=+0.097906279 container attach 2a2d95bb1d2e95bb6fdd1812c8f9aa4180695f82261776141bfa9b618b805b12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wilson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:39:30 compute-0 podman[91187]: 2025-11-26 11:39:29.964600476 +0000 UTC m=+0.017351674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:30 compute-0 ceph-mon[74928]: from='osd.2 [v2:192.168.122.100:6810/1840019852,v1:192.168.122.100:6811/1840019852]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 26 11:39:30 compute-0 ceph-mon[74928]: osdmap e14: 3 total, 2 up, 3 in
Nov 26 11:39:30 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:30 compute-0 ceph-mon[74928]: mgrmap e9: compute-0.mwrktr(active, since 48s)
Nov 26 11:39:30 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:30 compute-0 ceph-mgr[75197]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1840019852; not ready for session (expect reconnect)
Nov 26 11:39:30 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 11:39:30 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:30 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 11:39:31 compute-0 goofy_wilson[91200]: [
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:     {
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:         "available": false,
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:         "ceph_device": false,
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:         "lsm_data": {},
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:         "lvs": [],
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:         "path": "/dev/sr0",
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:         "rejected_reasons": [
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:             "Insufficient space (<5GB)",
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:             "Has a FileSystem"
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:         ],
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:         "sys_api": {
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:             "actuators": null,
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:             "device_nodes": "sr0",
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:             "devname": "sr0",
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:             "human_readable_size": "474.00 KB",
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:             "id_bus": "ata",
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:             "model": "QEMU DVD-ROM",
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:             "nr_requests": "64",
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:             "parent": "/dev/sr0",
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:             "partitions": {},
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:             "path": "/dev/sr0",
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:             "removable": "1",
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:             "rev": "2.5+",
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:             "ro": "0",
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:             "rotational": "1",
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:             "sas_address": "",
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:             "sas_device_handle": "",
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:             "scheduler_mode": "mq-deadline",
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:             "sectors": 0,
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:             "sectorsize": "2048",
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:             "size": 485376.0,
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:             "support_discard": "2048",
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:             "type": "disk",
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:             "vendor": "QEMU"
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:         }
Nov 26 11:39:31 compute-0 goofy_wilson[91200]:     }
Nov 26 11:39:31 compute-0 goofy_wilson[91200]: ]
Nov 26 11:39:31 compute-0 systemd[1]: libpod-2a2d95bb1d2e95bb6fdd1812c8f9aa4180695f82261776141bfa9b618b805b12.scope: Deactivated successfully.
Nov 26 11:39:31 compute-0 systemd[1]: libpod-2a2d95bb1d2e95bb6fdd1812c8f9aa4180695f82261776141bfa9b618b805b12.scope: Consumed 1.123s CPU time.
Nov 26 11:39:31 compute-0 podman[91187]: 2025-11-26 11:39:31.141787854 +0000 UTC m=+1.194539052 container died 2a2d95bb1d2e95bb6fdd1812c8f9aa4180695f82261776141bfa9b618b805b12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wilson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-6939316484ddcd91fb2075d36f7d544135dfaee873ca0a246770f749c6fb36f6-merged.mount: Deactivated successfully.
Nov 26 11:39:31 compute-0 podman[91187]: 2025-11-26 11:39:31.177576393 +0000 UTC m=+1.230327592 container remove 2a2d95bb1d2e95bb6fdd1812c8f9aa4180695f82261776141bfa9b618b805b12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 11:39:31 compute-0 systemd[1]: libpod-conmon-2a2d95bb1d2e95bb6fdd1812c8f9aa4180695f82261776141bfa9b618b805b12.scope: Deactivated successfully.
Nov 26 11:39:31 compute-0 sudo[91098]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:39:31 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:39:31 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:39:31 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:39:31 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 26 11:39:31 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 26 11:39:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 26 11:39:31 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 26 11:39:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 26 11:39:31 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 26 11:39:31 compute-0 ceph-mgr[75197]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43933k
Nov 26 11:39:31 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43933k
Nov 26 11:39:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 26 11:39:31 compute-0 ceph-mgr[75197]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44987733: error parsing value: Value '44987733' is below minimum 939524096
Nov 26 11:39:31 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44987733: error parsing value: Value '44987733' is below minimum 939524096
Nov 26 11:39:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:39:31 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:39:31 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:39:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:39:31 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:31 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev dc510d5e-9cb3-40f0-9336-d277f5d1158c does not exist
Nov 26 11:39:31 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 08d63de5-52d6-4c6a-90ff-31f7322d9355 does not exist
Nov 26 11:39:31 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 11f8ebff-092d-402b-8d66-8334b7998883 does not exist
Nov 26 11:39:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 11:39:31 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:39:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 11:39:31 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:39:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:39:31 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:31 compute-0 sudo[93178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:31 compute-0 sudo[93178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:31 compute-0 sudo[93178]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:31 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v30: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 26 11:39:31 compute-0 sudo[93203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:31 compute-0 sudo[93203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:31 compute-0 sudo[93203]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:31 compute-0 sudo[93228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:31 compute-0 sudo[93228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:31 compute-0 sudo[93228]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:31 compute-0 sudo[93253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 11:39:31 compute-0 sudo[93253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:39:31 compute-0 ceph-osd[90047]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 94.535 iops: 24201.081 elapsed_sec: 0.124
Nov 26 11:39:31 compute-0 ceph-osd[90047]: log_channel(cluster) log [WRN] : OSD bench result of 24201.080778 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 26 11:39:31 compute-0 ceph-osd[90047]: osd.2 0 waiting for initial osdmap
Nov 26 11:39:31 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2[90043]: 2025-11-26T11:39:31.704+0000 7f58cffe3640 -1 osd.2 0 waiting for initial osdmap
Nov 26 11:39:31 compute-0 ceph-osd[90047]: osd.2 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 26 11:39:31 compute-0 ceph-osd[90047]: osd.2 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 26 11:39:31 compute-0 ceph-osd[90047]: osd.2 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 26 11:39:31 compute-0 ceph-osd[90047]: osd.2 14 check_osdmap_features require_osd_release unknown -> reef
Nov 26 11:39:31 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-osd-2[90043]: 2025-11-26T11:39:31.720+0000 7f58cadf4640 -1 osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 26 11:39:31 compute-0 ceph-osd[90047]: osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 26 11:39:31 compute-0 ceph-osd[90047]: osd.2 14 set_numa_affinity not setting numa affinity
Nov 26 11:39:31 compute-0 ceph-osd[90047]: osd.2 14 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Nov 26 11:39:31 compute-0 podman[93309]: 2025-11-26 11:39:31.730322435 +0000 UTC m=+0.041345515 container create 6fb08eae26fc5a8a7f4e3091c72be3a09c2d8267bcfbf4e5b2bedf09f211eb7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_payne, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:39:31 compute-0 systemd[1]: Started libpod-conmon-6fb08eae26fc5a8a7f4e3091c72be3a09c2d8267bcfbf4e5b2bedf09f211eb7e.scope.
Nov 26 11:39:31 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:31 compute-0 podman[93309]: 2025-11-26 11:39:31.776902808 +0000 UTC m=+0.087925908 container init 6fb08eae26fc5a8a7f4e3091c72be3a09c2d8267bcfbf4e5b2bedf09f211eb7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_payne, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 11:39:31 compute-0 podman[93309]: 2025-11-26 11:39:31.781834157 +0000 UTC m=+0.092857237 container start 6fb08eae26fc5a8a7f4e3091c72be3a09c2d8267bcfbf4e5b2bedf09f211eb7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 11:39:31 compute-0 podman[93309]: 2025-11-26 11:39:31.783019912 +0000 UTC m=+0.094043012 container attach 6fb08eae26fc5a8a7f4e3091c72be3a09c2d8267bcfbf4e5b2bedf09f211eb7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_payne, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 11:39:31 compute-0 upbeat_payne[93324]: 167 167
Nov 26 11:39:31 compute-0 systemd[1]: libpod-6fb08eae26fc5a8a7f4e3091c72be3a09c2d8267bcfbf4e5b2bedf09f211eb7e.scope: Deactivated successfully.
Nov 26 11:39:31 compute-0 conmon[93324]: conmon 6fb08eae26fc5a8a7f4e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6fb08eae26fc5a8a7f4e3091c72be3a09c2d8267bcfbf4e5b2bedf09f211eb7e.scope/container/memory.events
Nov 26 11:39:31 compute-0 podman[93309]: 2025-11-26 11:39:31.785710481 +0000 UTC m=+0.096733582 container died 6fb08eae26fc5a8a7f4e3091c72be3a09c2d8267bcfbf4e5b2bedf09f211eb7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:39:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-44d98dc08f3acea63d7b81efa4b133d04fdb2483125702bc968ff6bd17d5e97e-merged.mount: Deactivated successfully.
Nov 26 11:39:31 compute-0 podman[93309]: 2025-11-26 11:39:31.804655608 +0000 UTC m=+0.115678688 container remove 6fb08eae26fc5a8a7f4e3091c72be3a09c2d8267bcfbf4e5b2bedf09f211eb7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_payne, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:31 compute-0 podman[93309]: 2025-11-26 11:39:31.714942338 +0000 UTC m=+0.025965418 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:31 compute-0 systemd[1]: libpod-conmon-6fb08eae26fc5a8a7f4e3091c72be3a09c2d8267bcfbf4e5b2bedf09f211eb7e.scope: Deactivated successfully.
Nov 26 11:39:31 compute-0 podman[93346]: 2025-11-26 11:39:31.91906051 +0000 UTC m=+0.030453219 container create 0c791639780efc27c6a2831ec8eecc8e9e0f906b63ff8a3d6647bf89b4662d8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 11:39:31 compute-0 systemd[1]: Started libpod-conmon-0c791639780efc27c6a2831ec8eecc8e9e0f906b63ff8a3d6647bf89b4662d8c.scope.
Nov 26 11:39:31 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfac8192f55fa42d33729a9ed551f86e98a96631db59b79008f281df6051ffc6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfac8192f55fa42d33729a9ed551f86e98a96631db59b79008f281df6051ffc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfac8192f55fa42d33729a9ed551f86e98a96631db59b79008f281df6051ffc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfac8192f55fa42d33729a9ed551f86e98a96631db59b79008f281df6051ffc6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfac8192f55fa42d33729a9ed551f86e98a96631db59b79008f281df6051ffc6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:31 compute-0 podman[93346]: 2025-11-26 11:39:31.975401658 +0000 UTC m=+0.086794365 container init 0c791639780efc27c6a2831ec8eecc8e9e0f906b63ff8a3d6647bf89b4662d8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_colden, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:31 compute-0 ceph-mon[74928]: purged_snaps scrub starts
Nov 26 11:39:31 compute-0 ceph-mon[74928]: purged_snaps scrub ok
Nov 26 11:39:31 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:31 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:31 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:31 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:31 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:31 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 26 11:39:31 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 26 11:39:31 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 26 11:39:31 compute-0 ceph-mon[74928]: Adjusting osd_memory_target on compute-0 to 43933k
Nov 26 11:39:31 compute-0 ceph-mon[74928]: Unable to set osd_memory_target on compute-0 to 44987733: error parsing value: Value '44987733' is below minimum 939524096
Nov 26 11:39:31 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:31 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:39:31 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:31 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:39:31 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:39:31 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:31 compute-0 ceph-mon[74928]: pgmap v30: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 26 11:39:31 compute-0 ceph-mgr[75197]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1840019852; not ready for session (expect reconnect)
Nov 26 11:39:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 11:39:31 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:31 compute-0 ceph-mgr[75197]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 11:39:31 compute-0 podman[93346]: 2025-11-26 11:39:31.983546935 +0000 UTC m=+0.094939643 container start 0c791639780efc27c6a2831ec8eecc8e9e0f906b63ff8a3d6647bf89b4662d8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:39:31 compute-0 podman[93346]: 2025-11-26 11:39:31.984707983 +0000 UTC m=+0.096100712 container attach 0c791639780efc27c6a2831ec8eecc8e9e0f906b63ff8a3d6647bf89b4662d8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_colden, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:32 compute-0 podman[93346]: 2025-11-26 11:39:31.907395299 +0000 UTC m=+0.018788017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 26 11:39:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e15 e15: 3 total, 3 up, 3 in
Nov 26 11:39:32 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/1840019852,v1:192.168.122.100:6811/1840019852] boot
Nov 26 11:39:32 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 3 up, 3 in
Nov 26 11:39:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 11:39:32 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:32 compute-0 ceph-osd[90047]: osd.2 15 state: booting -> active
Nov 26 11:39:32 compute-0 stupefied_colden[93359]: --> passed data devices: 0 physical, 3 LVM
Nov 26 11:39:32 compute-0 stupefied_colden[93359]: --> relative data size: 1.0
Nov 26 11:39:32 compute-0 stupefied_colden[93359]: --> All data devices are unavailable
Nov 26 11:39:32 compute-0 systemd[1]: libpod-0c791639780efc27c6a2831ec8eecc8e9e0f906b63ff8a3d6647bf89b4662d8c.scope: Deactivated successfully.
Nov 26 11:39:32 compute-0 podman[93346]: 2025-11-26 11:39:32.790969161 +0000 UTC m=+0.902361869 container died 0c791639780efc27c6a2831ec8eecc8e9e0f906b63ff8a3d6647bf89b4662d8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 11:39:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfac8192f55fa42d33729a9ed551f86e98a96631db59b79008f281df6051ffc6-merged.mount: Deactivated successfully.
Nov 26 11:39:32 compute-0 podman[93346]: 2025-11-26 11:39:32.820449541 +0000 UTC m=+0.931842249 container remove 0c791639780efc27c6a2831ec8eecc8e9e0f906b63ff8a3d6647bf89b4662d8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 11:39:32 compute-0 systemd[1]: libpod-conmon-0c791639780efc27c6a2831ec8eecc8e9e0f906b63ff8a3d6647bf89b4662d8c.scope: Deactivated successfully.
Nov 26 11:39:32 compute-0 sudo[93253]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:32 compute-0 sudo[93398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:32 compute-0 sudo[93398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:32 compute-0 sudo[93398]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:32 compute-0 sudo[93423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:32 compute-0 sudo[93423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:32 compute-0 sudo[93423]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:32 compute-0 sudo[93448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:32 compute-0 sudo[93448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:32 compute-0 sudo[93448]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:32 compute-0 ceph-mon[74928]: OSD bench result of 24201.080778 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 26 11:39:32 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:32 compute-0 ceph-mon[74928]: osd.2 [v2:192.168.122.100:6810/1840019852,v1:192.168.122.100:6811/1840019852] boot
Nov 26 11:39:32 compute-0 ceph-mon[74928]: osdmap e15: 3 total, 3 up, 3 in
Nov 26 11:39:32 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 11:39:33 compute-0 sudo[93473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- lvm list --format json
Nov 26 11:39:33 compute-0 sudo[93473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:33 compute-0 podman[93530]: 2025-11-26 11:39:33.227418616 +0000 UTC m=+0.025255321 container create eb6cc2a4e5f6a4d38d0234b0b0e2ca1ec172384c850303ca3b89b4d451399caf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 11:39:33 compute-0 systemd[1]: Started libpod-conmon-eb6cc2a4e5f6a4d38d0234b0b0e2ca1ec172384c850303ca3b89b4d451399caf.scope.
Nov 26 11:39:33 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:33 compute-0 podman[93530]: 2025-11-26 11:39:33.281108739 +0000 UTC m=+0.078945464 container init eb6cc2a4e5f6a4d38d0234b0b0e2ca1ec172384c850303ca3b89b4d451399caf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:33 compute-0 podman[93530]: 2025-11-26 11:39:33.285966498 +0000 UTC m=+0.083803202 container start eb6cc2a4e5f6a4d38d0234b0b0e2ca1ec172384c850303ca3b89b4d451399caf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:33 compute-0 podman[93530]: 2025-11-26 11:39:33.286916383 +0000 UTC m=+0.084753087 container attach eb6cc2a4e5f6a4d38d0234b0b0e2ca1ec172384c850303ca3b89b4d451399caf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 11:39:33 compute-0 jovial_haibt[93543]: 167 167
Nov 26 11:39:33 compute-0 systemd[1]: libpod-eb6cc2a4e5f6a4d38d0234b0b0e2ca1ec172384c850303ca3b89b4d451399caf.scope: Deactivated successfully.
Nov 26 11:39:33 compute-0 podman[93530]: 2025-11-26 11:39:33.289102618 +0000 UTC m=+0.086939322 container died eb6cc2a4e5f6a4d38d0234b0b0e2ca1ec172384c850303ca3b89b4d451399caf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 11:39:33 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 26 11:39:33 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Nov 26 11:39:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-50cfe7b39c1779706a2e463b2169b68d8973dbdd70c2db56e2fb1c3fb5249b42-merged.mount: Deactivated successfully.
Nov 26 11:39:33 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Nov 26 11:39:33 compute-0 podman[93530]: 2025-11-26 11:39:33.307408384 +0000 UTC m=+0.105245088 container remove eb6cc2a4e5f6a4d38d0234b0b0e2ca1ec172384c850303ca3b89b4d451399caf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 11:39:33 compute-0 podman[93530]: 2025-11-26 11:39:33.217190541 +0000 UTC m=+0.015027265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:33 compute-0 systemd[1]: libpod-conmon-eb6cc2a4e5f6a4d38d0234b0b0e2ca1ec172384c850303ca3b89b4d451399caf.scope: Deactivated successfully.
Nov 26 11:39:33 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v33: 1 pgs: 1 creating+peering; 0 B data, 1.2 GiB used, 59 GiB / 60 GiB avail
Nov 26 11:39:33 compute-0 podman[93565]: 2025-11-26 11:39:33.413811673 +0000 UTC m=+0.025990395 container create dfb1d6bb150275c00e8621c0dab9dd37983a931bb3357763f214acb31563c749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ride, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 11:39:33 compute-0 systemd[1]: Started libpod-conmon-dfb1d6bb150275c00e8621c0dab9dd37983a931bb3357763f214acb31563c749.scope.
Nov 26 11:39:33 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df3153b0520178c4966b8ba2e2ed1337d60751f81c2e30888f6f736faf379b2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df3153b0520178c4966b8ba2e2ed1337d60751f81c2e30888f6f736faf379b2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df3153b0520178c4966b8ba2e2ed1337d60751f81c2e30888f6f736faf379b2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df3153b0520178c4966b8ba2e2ed1337d60751f81c2e30888f6f736faf379b2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:33 compute-0 podman[93565]: 2025-11-26 11:39:33.469975381 +0000 UTC m=+0.082154113 container init dfb1d6bb150275c00e8621c0dab9dd37983a931bb3357763f214acb31563c749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ride, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:33 compute-0 podman[93565]: 2025-11-26 11:39:33.474762304 +0000 UTC m=+0.086941016 container start dfb1d6bb150275c00e8621c0dab9dd37983a931bb3357763f214acb31563c749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ride, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:39:33 compute-0 podman[93565]: 2025-11-26 11:39:33.477415924 +0000 UTC m=+0.089594636 container attach dfb1d6bb150275c00e8621c0dab9dd37983a931bb3357763f214acb31563c749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:33 compute-0 podman[93565]: 2025-11-26 11:39:33.403548891 +0000 UTC m=+0.015727623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:34 compute-0 elated_ride[93578]: {
Nov 26 11:39:34 compute-0 elated_ride[93578]:     "0": [
Nov 26 11:39:34 compute-0 elated_ride[93578]:         {
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "devices": [
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "/dev/loop3"
Nov 26 11:39:34 compute-0 elated_ride[93578]:             ],
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "lv_name": "ceph_lv0",
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "lv_size": "21470642176",
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a9ad59a0-aa2e-4d92-b571-519d2d145b6a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "lv_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "name": "ceph_lv0",
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "tags": {
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.block_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.cluster_name": "ceph",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.crush_device_class": "",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.encrypted": "0",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.osd_fsid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.osd_id": "0",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.type": "block",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.vdo": "0"
Nov 26 11:39:34 compute-0 elated_ride[93578]:             },
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "type": "block",
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "vg_name": "ceph_vg0"
Nov 26 11:39:34 compute-0 elated_ride[93578]:         }
Nov 26 11:39:34 compute-0 elated_ride[93578]:     ],
Nov 26 11:39:34 compute-0 elated_ride[93578]:     "1": [
Nov 26 11:39:34 compute-0 elated_ride[93578]:         {
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "devices": [
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "/dev/loop4"
Nov 26 11:39:34 compute-0 elated_ride[93578]:             ],
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "lv_name": "ceph_lv1",
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "lv_size": "21470642176",
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2627095b-eef8-4027-bfef-68bf7cb6801f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "lv_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "name": "ceph_lv1",
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "tags": {
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.block_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.cluster_name": "ceph",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.crush_device_class": "",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.encrypted": "0",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.osd_fsid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.osd_id": "1",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.type": "block",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.vdo": "0"
Nov 26 11:39:34 compute-0 elated_ride[93578]:             },
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "type": "block",
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "vg_name": "ceph_vg1"
Nov 26 11:39:34 compute-0 elated_ride[93578]:         }
Nov 26 11:39:34 compute-0 elated_ride[93578]:     ],
Nov 26 11:39:34 compute-0 elated_ride[93578]:     "2": [
Nov 26 11:39:34 compute-0 elated_ride[93578]:         {
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "devices": [
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "/dev/loop5"
Nov 26 11:39:34 compute-0 elated_ride[93578]:             ],
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "lv_name": "ceph_lv2",
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "lv_size": "21470642176",
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d56156fb-7361-4bef-b06b-1320109b4323,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "lv_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "name": "ceph_lv2",
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "tags": {
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.block_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.cluster_name": "ceph",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.crush_device_class": "",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.encrypted": "0",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.osd_fsid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.osd_id": "2",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.type": "block",
Nov 26 11:39:34 compute-0 elated_ride[93578]:                 "ceph.vdo": "0"
Nov 26 11:39:34 compute-0 elated_ride[93578]:             },
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "type": "block",
Nov 26 11:39:34 compute-0 elated_ride[93578]:             "vg_name": "ceph_vg2"
Nov 26 11:39:34 compute-0 elated_ride[93578]:         }
Nov 26 11:39:34 compute-0 elated_ride[93578]:     ]
Nov 26 11:39:34 compute-0 elated_ride[93578]: }
Nov 26 11:39:34 compute-0 systemd[1]: libpod-dfb1d6bb150275c00e8621c0dab9dd37983a931bb3357763f214acb31563c749.scope: Deactivated successfully.
Nov 26 11:39:34 compute-0 conmon[93578]: conmon dfb1d6bb150275c00e86 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dfb1d6bb150275c00e8621c0dab9dd37983a931bb3357763f214acb31563c749.scope/container/memory.events
Nov 26 11:39:34 compute-0 podman[93565]: 2025-11-26 11:39:34.102779491 +0000 UTC m=+0.714958202 container died dfb1d6bb150275c00e8621c0dab9dd37983a931bb3357763f214acb31563c749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-df3153b0520178c4966b8ba2e2ed1337d60751f81c2e30888f6f736faf379b2f-merged.mount: Deactivated successfully.
Nov 26 11:39:34 compute-0 podman[93565]: 2025-11-26 11:39:34.14110773 +0000 UTC m=+0.753286442 container remove dfb1d6bb150275c00e8621c0dab9dd37983a931bb3357763f214acb31563c749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:34 compute-0 systemd[1]: libpod-conmon-dfb1d6bb150275c00e8621c0dab9dd37983a931bb3357763f214acb31563c749.scope: Deactivated successfully.
Nov 26 11:39:34 compute-0 sudo[93473]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:34 compute-0 sudo[93598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:34 compute-0 sudo[93598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:34 compute-0 sudo[93598]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:34 compute-0 sudo[93623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:34 compute-0 sudo[93623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:34 compute-0 sudo[93623]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:34 compute-0 sudo[93648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:34 compute-0 sudo[93648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:34 compute-0 sudo[93648]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:34 compute-0 ceph-mon[74928]: osdmap e16: 3 total, 3 up, 3 in
Nov 26 11:39:34 compute-0 ceph-mon[74928]: pgmap v33: 1 pgs: 1 creating+peering; 0 B data, 1.2 GiB used, 59 GiB / 60 GiB avail
Nov 26 11:39:34 compute-0 sudo[93673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- raw list --format json
Nov 26 11:39:34 compute-0 sudo[93673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:34 compute-0 podman[93730]: 2025-11-26 11:39:34.571853843 +0000 UTC m=+0.025960147 container create be485a3101fe65ddb98288df35938b2d85b00600f758f07965decf276a26502c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_banach, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:34 compute-0 systemd[1]: Started libpod-conmon-be485a3101fe65ddb98288df35938b2d85b00600f758f07965decf276a26502c.scope.
Nov 26 11:39:34 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:34 compute-0 podman[93730]: 2025-11-26 11:39:34.631731053 +0000 UTC m=+0.085837377 container init be485a3101fe65ddb98288df35938b2d85b00600f758f07965decf276a26502c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 11:39:34 compute-0 podman[93730]: 2025-11-26 11:39:34.636190482 +0000 UTC m=+0.090296786 container start be485a3101fe65ddb98288df35938b2d85b00600f758f07965decf276a26502c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_banach, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:34 compute-0 podman[93730]: 2025-11-26 11:39:34.638106532 +0000 UTC m=+0.092212835 container attach be485a3101fe65ddb98288df35938b2d85b00600f758f07965decf276a26502c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_banach, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 26 11:39:34 compute-0 zealous_banach[93744]: 167 167
Nov 26 11:39:34 compute-0 systemd[1]: libpod-be485a3101fe65ddb98288df35938b2d85b00600f758f07965decf276a26502c.scope: Deactivated successfully.
Nov 26 11:39:34 compute-0 conmon[93744]: conmon be485a3101fe65ddb982 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-be485a3101fe65ddb98288df35938b2d85b00600f758f07965decf276a26502c.scope/container/memory.events
Nov 26 11:39:34 compute-0 podman[93730]: 2025-11-26 11:39:34.640475837 +0000 UTC m=+0.094582141 container died be485a3101fe65ddb98288df35938b2d85b00600f758f07965decf276a26502c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_banach, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 11:39:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a232adf518c7ce56bc3725bcd9090af35fdfc3d998b5593efcc1c0d3656c033-merged.mount: Deactivated successfully.
Nov 26 11:39:34 compute-0 podman[93730]: 2025-11-26 11:39:34.561788697 +0000 UTC m=+0.015895021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:34 compute-0 podman[93730]: 2025-11-26 11:39:34.660583805 +0000 UTC m=+0.114690109 container remove be485a3101fe65ddb98288df35938b2d85b00600f758f07965decf276a26502c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Nov 26 11:39:34 compute-0 systemd[1]: libpod-conmon-be485a3101fe65ddb98288df35938b2d85b00600f758f07965decf276a26502c.scope: Deactivated successfully.
Nov 26 11:39:34 compute-0 podman[93766]: 2025-11-26 11:39:34.77381752 +0000 UTC m=+0.026744886 container create fa54468c41520c8e1a5dd549a145cfdad871339d7a598dae97bd445e1d588e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bhabha, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 11:39:34 compute-0 systemd[1]: Started libpod-conmon-fa54468c41520c8e1a5dd549a145cfdad871339d7a598dae97bd445e1d588e0e.scope.
Nov 26 11:39:34 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aba102a31c4d6b87fd883c0e8c27369e9d0f131caafb637216d9f1a8ce78ae55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aba102a31c4d6b87fd883c0e8c27369e9d0f131caafb637216d9f1a8ce78ae55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aba102a31c4d6b87fd883c0e8c27369e9d0f131caafb637216d9f1a8ce78ae55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aba102a31c4d6b87fd883c0e8c27369e9d0f131caafb637216d9f1a8ce78ae55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:34 compute-0 podman[93766]: 2025-11-26 11:39:34.833107268 +0000 UTC m=+0.086034634 container init fa54468c41520c8e1a5dd549a145cfdad871339d7a598dae97bd445e1d588e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bhabha, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 11:39:34 compute-0 podman[93766]: 2025-11-26 11:39:34.838705943 +0000 UTC m=+0.091633319 container start fa54468c41520c8e1a5dd549a145cfdad871339d7a598dae97bd445e1d588e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bhabha, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:34 compute-0 podman[93766]: 2025-11-26 11:39:34.839814331 +0000 UTC m=+0.092741706 container attach fa54468c41520c8e1a5dd549a145cfdad871339d7a598dae97bd445e1d588e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:34 compute-0 podman[93766]: 2025-11-26 11:39:34.762763927 +0000 UTC m=+0.015691312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:34 compute-0 sudo[93808]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzgtmbsanmkhqyigxlqewicojxmkbcyk ; /usr/bin/python3'
Nov 26 11:39:35 compute-0 sudo[93808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:35 compute-0 python3[93810]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:39:35 compute-0 podman[93812]: 2025-11-26 11:39:35.152960867 +0000 UTC m=+0.026354610 container create 57bc9edfd403dfa0f07e8811117d1fce20cfa83cc60167ea7cf4338d7c9bb774 (image=quay.io/ceph/ceph:v18, name=relaxed_shaw, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 11:39:35 compute-0 systemd[1]: Started libpod-conmon-57bc9edfd403dfa0f07e8811117d1fce20cfa83cc60167ea7cf4338d7c9bb774.scope.
Nov 26 11:39:35 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a916ae31668d189ac7e03785a3cc3e46714e7735236b936795cd7f3519763d4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a916ae31668d189ac7e03785a3cc3e46714e7735236b936795cd7f3519763d4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a916ae31668d189ac7e03785a3cc3e46714e7735236b936795cd7f3519763d4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:35 compute-0 podman[93812]: 2025-11-26 11:39:35.208252669 +0000 UTC m=+0.081646433 container init 57bc9edfd403dfa0f07e8811117d1fce20cfa83cc60167ea7cf4338d7c9bb774 (image=quay.io/ceph/ceph:v18, name=relaxed_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 11:39:35 compute-0 podman[93812]: 2025-11-26 11:39:35.212278829 +0000 UTC m=+0.085672563 container start 57bc9edfd403dfa0f07e8811117d1fce20cfa83cc60167ea7cf4338d7c9bb774 (image=quay.io/ceph/ceph:v18, name=relaxed_shaw, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 26 11:39:35 compute-0 podman[93812]: 2025-11-26 11:39:35.213345096 +0000 UTC m=+0.086738841 container attach 57bc9edfd403dfa0f07e8811117d1fce20cfa83cc60167ea7cf4338d7c9bb774 (image=quay.io/ceph/ceph:v18, name=relaxed_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 11:39:35 compute-0 podman[93812]: 2025-11-26 11:39:35.142288311 +0000 UTC m=+0.015682076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:39:35 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v34: 1 pgs: 1 active+clean; 449 KiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Nov 26 11:39:35 compute-0 hardcore_bhabha[93780]: {
Nov 26 11:39:35 compute-0 hardcore_bhabha[93780]:     "2627095b-eef8-4027-bfef-68bf7cb6801f": {
Nov 26 11:39:35 compute-0 hardcore_bhabha[93780]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:39:35 compute-0 hardcore_bhabha[93780]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 11:39:35 compute-0 hardcore_bhabha[93780]:         "osd_id": 1,
Nov 26 11:39:35 compute-0 hardcore_bhabha[93780]:         "osd_uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:39:35 compute-0 hardcore_bhabha[93780]:         "type": "bluestore"
Nov 26 11:39:35 compute-0 hardcore_bhabha[93780]:     },
Nov 26 11:39:35 compute-0 hardcore_bhabha[93780]:     "a9ad59a0-aa2e-4d92-b571-519d2d145b6a": {
Nov 26 11:39:35 compute-0 hardcore_bhabha[93780]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:39:35 compute-0 hardcore_bhabha[93780]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 11:39:35 compute-0 hardcore_bhabha[93780]:         "osd_id": 0,
Nov 26 11:39:35 compute-0 hardcore_bhabha[93780]:         "osd_uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:39:35 compute-0 hardcore_bhabha[93780]:         "type": "bluestore"
Nov 26 11:39:35 compute-0 hardcore_bhabha[93780]:     },
Nov 26 11:39:35 compute-0 hardcore_bhabha[93780]:     "d56156fb-7361-4bef-b06b-1320109b4323": {
Nov 26 11:39:35 compute-0 hardcore_bhabha[93780]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:39:35 compute-0 hardcore_bhabha[93780]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 11:39:35 compute-0 hardcore_bhabha[93780]:         "osd_id": 2,
Nov 26 11:39:35 compute-0 hardcore_bhabha[93780]:         "osd_uuid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:39:35 compute-0 hardcore_bhabha[93780]:         "type": "bluestore"
Nov 26 11:39:35 compute-0 hardcore_bhabha[93780]:     }
Nov 26 11:39:35 compute-0 hardcore_bhabha[93780]: }
Nov 26 11:39:35 compute-0 systemd[1]: libpod-fa54468c41520c8e1a5dd549a145cfdad871339d7a598dae97bd445e1d588e0e.scope: Deactivated successfully.
Nov 26 11:39:35 compute-0 conmon[93780]: conmon fa54468c41520c8e1a5d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fa54468c41520c8e1a5dd549a145cfdad871339d7a598dae97bd445e1d588e0e.scope/container/memory.events
Nov 26 11:39:35 compute-0 podman[93877]: 2025-11-26 11:39:35.631886897 +0000 UTC m=+0.017085385 container died fa54468c41520c8e1a5dd549a145cfdad871339d7a598dae97bd445e1d588e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bhabha, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 11:39:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-aba102a31c4d6b87fd883c0e8c27369e9d0f131caafb637216d9f1a8ce78ae55-merged.mount: Deactivated successfully.
Nov 26 11:39:35 compute-0 podman[93877]: 2025-11-26 11:39:35.662029421 +0000 UTC m=+0.047227891 container remove fa54468c41520c8e1a5dd549a145cfdad871339d7a598dae97bd445e1d588e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bhabha, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:35 compute-0 systemd[1]: libpod-conmon-fa54468c41520c8e1a5dd549a145cfdad871339d7a598dae97bd445e1d588e0e.scope: Deactivated successfully.
Nov 26 11:39:35 compute-0 sudo[93673]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:35 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:39:35 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:35 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:39:35 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:35 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 26 11:39:35 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1101063418' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 26 11:39:35 compute-0 relaxed_shaw[93826]: 
Nov 26 11:39:35 compute-0 relaxed_shaw[93826]: {"fsid":"ebab460c-3fd7-5f66-aa87-e10c143123f7","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":94,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":16,"num_osds":3,"num_up_osds":3,"osd_up_since":1764157172,"num_in_osds":3,"osd_in_since":1764157152,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":1341693952,"bytes_avail":63070232576,"bytes_total":64411926528},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-26T11:37:59.453002+0000","services":{}},"progress_events":{}}
Nov 26 11:39:35 compute-0 systemd[1]: libpod-57bc9edfd403dfa0f07e8811117d1fce20cfa83cc60167ea7cf4338d7c9bb774.scope: Deactivated successfully.
Nov 26 11:39:35 compute-0 podman[93812]: 2025-11-26 11:39:35.714903726 +0000 UTC m=+0.588297470 container died 57bc9edfd403dfa0f07e8811117d1fce20cfa83cc60167ea7cf4338d7c9bb774 (image=quay.io/ceph/ceph:v18, name=relaxed_shaw, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 11:39:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a916ae31668d189ac7e03785a3cc3e46714e7735236b936795cd7f3519763d4-merged.mount: Deactivated successfully.
Nov 26 11:39:35 compute-0 sudo[93889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:35 compute-0 sudo[93889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:35 compute-0 podman[93812]: 2025-11-26 11:39:35.741583106 +0000 UTC m=+0.614976849 container remove 57bc9edfd403dfa0f07e8811117d1fce20cfa83cc60167ea7cf4338d7c9bb774 (image=quay.io/ceph/ceph:v18, name=relaxed_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:35 compute-0 sudo[93889]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:35 compute-0 systemd[1]: libpod-conmon-57bc9edfd403dfa0f07e8811117d1fce20cfa83cc60167ea7cf4338d7c9bb774.scope: Deactivated successfully.
Nov 26 11:39:35 compute-0 sudo[93808]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:35 compute-0 sudo[93926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:39:35 compute-0 sudo[93926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:35 compute-0 sudo[93926]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:35 compute-0 sudo[93951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:35 compute-0 sudo[93951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:35 compute-0 sudo[93951]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:35 compute-0 sudo[93976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:35 compute-0 sudo[93976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:35 compute-0 sudo[93976]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:35 compute-0 sudo[94001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:35 compute-0 sudo[94001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:35 compute-0 sudo[94001]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:35 compute-0 sudo[94026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 26 11:39:35 compute-0 sudo[94026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:35 compute-0 sudo[94074]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tunkqqtxurlgtuulrnlcmfqgkcimvbrh ; /usr/bin/python3'
Nov 26 11:39:35 compute-0 sudo[94074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:36 compute-0 python3[94076]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:39:36 compute-0 podman[94106]: 2025-11-26 11:39:36.151651782 +0000 UTC m=+0.033955731 container create 1887b1ebe64c4bcbea29cfd0ce7ec667751f5a41ea5f57e312379f06a68e4f1c (image=quay.io/ceph/ceph:v18, name=zen_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 11:39:36 compute-0 systemd[1]: Started libpod-conmon-1887b1ebe64c4bcbea29cfd0ce7ec667751f5a41ea5f57e312379f06a68e4f1c.scope.
Nov 26 11:39:36 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd73974266387d7149c569eb1e162d913a57a681307db455663872e9c48ad9ff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd73974266387d7149c569eb1e162d913a57a681307db455663872e9c48ad9ff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:36 compute-0 podman[94106]: 2025-11-26 11:39:36.203691793 +0000 UTC m=+0.085995761 container init 1887b1ebe64c4bcbea29cfd0ce7ec667751f5a41ea5f57e312379f06a68e4f1c (image=quay.io/ceph/ceph:v18, name=zen_haslett, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 11:39:36 compute-0 podman[94106]: 2025-11-26 11:39:36.208335603 +0000 UTC m=+0.090639550 container start 1887b1ebe64c4bcbea29cfd0ce7ec667751f5a41ea5f57e312379f06a68e4f1c (image=quay.io/ceph/ceph:v18, name=zen_haslett, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:36 compute-0 podman[94106]: 2025-11-26 11:39:36.209400277 +0000 UTC m=+0.091704245 container attach 1887b1ebe64c4bcbea29cfd0ce7ec667751f5a41ea5f57e312379f06a68e4f1c (image=quay.io/ceph/ceph:v18, name=zen_haslett, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:36 compute-0 podman[94106]: 2025-11-26 11:39:36.13643183 +0000 UTC m=+0.018735788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:39:36 compute-0 podman[94152]: 2025-11-26 11:39:36.267675046 +0000 UTC m=+0.040596414 container exec 810eaed6cbde00330c0646cedaef5c2ae94579236d67ba42b8ef02d055d04ad5 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Nov 26 11:39:36 compute-0 podman[94152]: 2025-11-26 11:39:36.340860617 +0000 UTC m=+0.113781984 container exec_died 810eaed6cbde00330c0646cedaef5c2ae94579236d67ba42b8ef02d055d04ad5 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 11:39:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:39:36 compute-0 sudo[94026]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:39:36 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:39:36 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 26 11:39:36 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3296592290' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 11:39:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:39:36 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:39:36 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:39:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:39:36 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:36 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 63f46b28-2c17-4bfc-a060-6c3e216cb72d does not exist
Nov 26 11:39:36 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev b4d0948e-4b47-4c56-a381-a36d5fb2bdbc does not exist
Nov 26 11:39:36 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev abc5f445-3220-4618-9021-88fb05450336 does not exist
Nov 26 11:39:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 11:39:36 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:39:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 11:39:36 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:39:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:39:36 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:36 compute-0 sudo[94275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:36 compute-0 sudo[94275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:36 compute-0 sudo[94275]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:36 compute-0 ceph-mon[74928]: pgmap v34: 1 pgs: 1 active+clean; 449 KiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Nov 26 11:39:36 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:36 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:36 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1101063418' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 26 11:39:36 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:36 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:36 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3296592290' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 11:39:36 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:36 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:39:36 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:36 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:39:36 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:39:36 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:36 compute-0 sudo[94300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:36 compute-0 sudo[94300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:36 compute-0 sudo[94300]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:36 compute-0 sudo[94325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:36 compute-0 sudo[94325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:36 compute-0 sudo[94325]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:36 compute-0 sudo[94350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 11:39:36 compute-0 sudo[94350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:37 compute-0 podman[94406]: 2025-11-26 11:39:37.013693445 +0000 UTC m=+0.025574963 container create 0ea622571ba4391d252ee46065152c5e20993b1e11e390f576c81bdf8f36d8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_goldberg, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:37 compute-0 systemd[1]: Started libpod-conmon-0ea622571ba4391d252ee46065152c5e20993b1e11e390f576c81bdf8f36d8a4.scope.
Nov 26 11:39:37 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:37 compute-0 podman[94406]: 2025-11-26 11:39:37.069476095 +0000 UTC m=+0.081357612 container init 0ea622571ba4391d252ee46065152c5e20993b1e11e390f576c81bdf8f36d8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_goldberg, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 11:39:37 compute-0 podman[94406]: 2025-11-26 11:39:37.073420429 +0000 UTC m=+0.085301945 container start 0ea622571ba4391d252ee46065152c5e20993b1e11e390f576c81bdf8f36d8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_goldberg, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:37 compute-0 podman[94406]: 2025-11-26 11:39:37.074523506 +0000 UTC m=+0.086405023 container attach 0ea622571ba4391d252ee46065152c5e20993b1e11e390f576c81bdf8f36d8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_goldberg, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 11:39:37 compute-0 recursing_goldberg[94420]: 167 167
Nov 26 11:39:37 compute-0 systemd[1]: libpod-0ea622571ba4391d252ee46065152c5e20993b1e11e390f576c81bdf8f36d8a4.scope: Deactivated successfully.
Nov 26 11:39:37 compute-0 podman[94406]: 2025-11-26 11:39:37.076537602 +0000 UTC m=+0.088419120 container died 0ea622571ba4391d252ee46065152c5e20993b1e11e390f576c81bdf8f36d8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8b98c87806a43c666afec6c3f4cc185c543ee8f9156a43ee9452d044d9b00be-merged.mount: Deactivated successfully.
Nov 26 11:39:37 compute-0 podman[94406]: 2025-11-26 11:39:37.095409891 +0000 UTC m=+0.107291408 container remove 0ea622571ba4391d252ee46065152c5e20993b1e11e390f576c81bdf8f36d8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_goldberg, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 11:39:37 compute-0 podman[94406]: 2025-11-26 11:39:37.002929885 +0000 UTC m=+0.014811433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:37 compute-0 systemd[1]: libpod-conmon-0ea622571ba4391d252ee46065152c5e20993b1e11e390f576c81bdf8f36d8a4.scope: Deactivated successfully.
Nov 26 11:39:37 compute-0 podman[94441]: 2025-11-26 11:39:37.204356708 +0000 UTC m=+0.026942553 container create a4dcf486081df7a24a4b9c04e8193c9a1a782d5607e00c15666e3760eeef9253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:37 compute-0 systemd[1]: Started libpod-conmon-a4dcf486081df7a24a4b9c04e8193c9a1a782d5607e00c15666e3760eeef9253.scope.
Nov 26 11:39:37 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85cfd62c40a354514b53cc018bdc7b1e5d7802c3ebb76ff41f7912a9b180a333/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85cfd62c40a354514b53cc018bdc7b1e5d7802c3ebb76ff41f7912a9b180a333/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85cfd62c40a354514b53cc018bdc7b1e5d7802c3ebb76ff41f7912a9b180a333/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85cfd62c40a354514b53cc018bdc7b1e5d7802c3ebb76ff41f7912a9b180a333/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85cfd62c40a354514b53cc018bdc7b1e5d7802c3ebb76ff41f7912a9b180a333/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:37 compute-0 podman[94441]: 2025-11-26 11:39:37.255115562 +0000 UTC m=+0.077701406 container init a4dcf486081df7a24a4b9c04e8193c9a1a782d5607e00c15666e3760eeef9253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hypatia, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 26 11:39:37 compute-0 podman[94441]: 2025-11-26 11:39:37.263387732 +0000 UTC m=+0.085973578 container start a4dcf486081df7a24a4b9c04e8193c9a1a782d5607e00c15666e3760eeef9253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hypatia, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:39:37 compute-0 podman[94441]: 2025-11-26 11:39:37.26488875 +0000 UTC m=+0.087474605 container attach a4dcf486081df7a24a4b9c04e8193c9a1a782d5607e00c15666e3760eeef9253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 11:39:37 compute-0 podman[94441]: 2025-11-26 11:39:37.192895065 +0000 UTC m=+0.015480930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:37 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Nov 26 11:39:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 26 11:39:37 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3296592290' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 11:39:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Nov 26 11:39:37 compute-0 zen_haslett[94135]: pool 'vms' created
Nov 26 11:39:37 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Nov 26 11:39:37 compute-0 systemd[1]: libpod-1887b1ebe64c4bcbea29cfd0ce7ec667751f5a41ea5f57e312379f06a68e4f1c.scope: Deactivated successfully.
Nov 26 11:39:37 compute-0 podman[94106]: 2025-11-26 11:39:37.645092792 +0000 UTC m=+1.527396750 container died 1887b1ebe64c4bcbea29cfd0ce7ec667751f5a41ea5f57e312379f06a68e4f1c (image=quay.io/ceph/ceph:v18, name=zen_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd73974266387d7149c569eb1e162d913a57a681307db455663872e9c48ad9ff-merged.mount: Deactivated successfully.
Nov 26 11:39:37 compute-0 podman[94106]: 2025-11-26 11:39:37.667482458 +0000 UTC m=+1.549786405 container remove 1887b1ebe64c4bcbea29cfd0ce7ec667751f5a41ea5f57e312379f06a68e4f1c (image=quay.io/ceph/ceph:v18, name=zen_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 11:39:37 compute-0 systemd[1]: libpod-conmon-1887b1ebe64c4bcbea29cfd0ce7ec667751f5a41ea5f57e312379f06a68e4f1c.scope: Deactivated successfully.
Nov 26 11:39:37 compute-0 sudo[94074]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:37 compute-0 sudo[94494]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmeljjtcownrbuizefzqcgihxpiqnltc ; /usr/bin/python3'
Nov 26 11:39:37 compute-0 sudo[94494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:37 compute-0 python3[94496]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:39:37 compute-0 podman[94505]: 2025-11-26 11:39:37.927049473 +0000 UTC m=+0.030519036 container create 20d385237669683d61ff20d6d94ebf16bd2e4fda63a25e0018bd80fe35678072 (image=quay.io/ceph/ceph:v18, name=determined_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:37 compute-0 systemd[1]: Started libpod-conmon-20d385237669683d61ff20d6d94ebf16bd2e4fda63a25e0018bd80fe35678072.scope.
Nov 26 11:39:37 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d9ecf2333aec29b2ea9a86bffea187a18be4473297c845ebef9cb4b7bb12d4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d9ecf2333aec29b2ea9a86bffea187a18be4473297c845ebef9cb4b7bb12d4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:37 compute-0 podman[94505]: 2025-11-26 11:39:37.97606679 +0000 UTC m=+0.079536363 container init 20d385237669683d61ff20d6d94ebf16bd2e4fda63a25e0018bd80fe35678072 (image=quay.io/ceph/ceph:v18, name=determined_roentgen, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 26 11:39:37 compute-0 podman[94505]: 2025-11-26 11:39:37.980840147 +0000 UTC m=+0.084309710 container start 20d385237669683d61ff20d6d94ebf16bd2e4fda63a25e0018bd80fe35678072 (image=quay.io/ceph/ceph:v18, name=determined_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 11:39:37 compute-0 podman[94505]: 2025-11-26 11:39:37.981807255 +0000 UTC m=+0.085276818 container attach 20d385237669683d61ff20d6d94ebf16bd2e4fda63a25e0018bd80fe35678072 (image=quay.io/ceph/ceph:v18, name=determined_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 11:39:38 compute-0 podman[94505]: 2025-11-26 11:39:37.916030676 +0000 UTC m=+0.019500260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:39:38 compute-0 condescending_hypatia[94454]: --> passed data devices: 0 physical, 3 LVM
Nov 26 11:39:38 compute-0 condescending_hypatia[94454]: --> relative data size: 1.0
Nov 26 11:39:38 compute-0 condescending_hypatia[94454]: --> All data devices are unavailable
Nov 26 11:39:38 compute-0 systemd[1]: libpod-a4dcf486081df7a24a4b9c04e8193c9a1a782d5607e00c15666e3760eeef9253.scope: Deactivated successfully.
Nov 26 11:39:38 compute-0 conmon[94454]: conmon a4dcf486081df7a24a4b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a4dcf486081df7a24a4b9c04e8193c9a1a782d5607e00c15666e3760eeef9253.scope/container/memory.events
Nov 26 11:39:38 compute-0 podman[94538]: 2025-11-26 11:39:38.118187543 +0000 UTC m=+0.015778890 container died a4dcf486081df7a24a4b9c04e8193c9a1a782d5607e00c15666e3760eeef9253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 11:39:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-85cfd62c40a354514b53cc018bdc7b1e5d7802c3ebb76ff41f7912a9b180a333-merged.mount: Deactivated successfully.
Nov 26 11:39:38 compute-0 podman[94538]: 2025-11-26 11:39:38.146063719 +0000 UTC m=+0.043655066 container remove a4dcf486081df7a24a4b9c04e8193c9a1a782d5607e00c15666e3760eeef9253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 11:39:38 compute-0 systemd[1]: libpod-conmon-a4dcf486081df7a24a4b9c04e8193c9a1a782d5607e00c15666e3760eeef9253.scope: Deactivated successfully.
Nov 26 11:39:38 compute-0 sudo[94350]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:38 compute-0 sudo[94550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:38 compute-0 sudo[94550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:38 compute-0 sudo[94550]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:38 compute-0 sudo[94575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:38 compute-0 sudo[94575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:38 compute-0 sudo[94575]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:38 compute-0 sudo[94619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:38 compute-0 sudo[94619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:38 compute-0 sudo[94619]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:38 compute-0 sudo[94644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- lvm list --format json
Nov 26 11:39:38 compute-0 sudo[94644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:38 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:38 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 26 11:39:38 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/934815048' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 11:39:38 compute-0 podman[94702]: 2025-11-26 11:39:38.551116925 +0000 UTC m=+0.026726851 container create 0ed52ebe9e9005ed117f306ecef255a743be55ff470f6da3dcf37a2f7a3f9a8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:38 compute-0 systemd[1]: Started libpod-conmon-0ed52ebe9e9005ed117f306ecef255a743be55ff470f6da3dcf37a2f7a3f9a8c.scope.
Nov 26 11:39:38 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:38 compute-0 podman[94702]: 2025-11-26 11:39:38.597076291 +0000 UTC m=+0.072686217 container init 0ed52ebe9e9005ed117f306ecef255a743be55ff470f6da3dcf37a2f7a3f9a8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_pascal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 11:39:38 compute-0 podman[94702]: 2025-11-26 11:39:38.600947346 +0000 UTC m=+0.076557272 container start 0ed52ebe9e9005ed117f306ecef255a743be55ff470f6da3dcf37a2f7a3f9a8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 11:39:38 compute-0 podman[94702]: 2025-11-26 11:39:38.602020796 +0000 UTC m=+0.077630742 container attach 0ed52ebe9e9005ed117f306ecef255a743be55ff470f6da3dcf37a2f7a3f9a8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_pascal, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:38 compute-0 exciting_pascal[94715]: 167 167
Nov 26 11:39:38 compute-0 systemd[1]: libpod-0ed52ebe9e9005ed117f306ecef255a743be55ff470f6da3dcf37a2f7a3f9a8c.scope: Deactivated successfully.
Nov 26 11:39:38 compute-0 podman[94702]: 2025-11-26 11:39:38.604129033 +0000 UTC m=+0.079738959 container died 0ed52ebe9e9005ed117f306ecef255a743be55ff470f6da3dcf37a2f7a3f9a8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_pascal, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:39:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6545f475b7c38e8ef78e3ed005c13917d512fe0ef429cba0d9eb2e41695828d5-merged.mount: Deactivated successfully.
Nov 26 11:39:38 compute-0 podman[94702]: 2025-11-26 11:39:38.622518388 +0000 UTC m=+0.098128315 container remove 0ed52ebe9e9005ed117f306ecef255a743be55ff470f6da3dcf37a2f7a3f9a8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_pascal, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 11:39:38 compute-0 podman[94702]: 2025-11-26 11:39:38.539960966 +0000 UTC m=+0.015570903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:38 compute-0 systemd[1]: libpod-conmon-0ed52ebe9e9005ed117f306ecef255a743be55ff470f6da3dcf37a2f7a3f9a8c.scope: Deactivated successfully.
Nov 26 11:39:38 compute-0 ceph-mon[74928]: pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Nov 26 11:39:38 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3296592290' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 11:39:38 compute-0 ceph-mon[74928]: osdmap e17: 3 total, 3 up, 3 in
Nov 26 11:39:38 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/934815048' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 11:39:38 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 26 11:39:38 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/934815048' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 11:39:38 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Nov 26 11:39:38 compute-0 determined_roentgen[94524]: pool 'volumes' created
Nov 26 11:39:38 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Nov 26 11:39:38 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 18 pg[3.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:38 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:38 compute-0 systemd[1]: libpod-20d385237669683d61ff20d6d94ebf16bd2e4fda63a25e0018bd80fe35678072.scope: Deactivated successfully.
Nov 26 11:39:38 compute-0 podman[94505]: 2025-11-26 11:39:38.712430538 +0000 UTC m=+0.815900101 container died 20d385237669683d61ff20d6d94ebf16bd2e4fda63a25e0018bd80fe35678072 (image=quay.io/ceph/ceph:v18, name=determined_roentgen, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-55d9ecf2333aec29b2ea9a86bffea187a18be4473297c845ebef9cb4b7bb12d4-merged.mount: Deactivated successfully.
Nov 26 11:39:38 compute-0 podman[94505]: 2025-11-26 11:39:38.742182497 +0000 UTC m=+0.845652060 container remove 20d385237669683d61ff20d6d94ebf16bd2e4fda63a25e0018bd80fe35678072 (image=quay.io/ceph/ceph:v18, name=determined_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:38 compute-0 systemd[1]: libpod-conmon-20d385237669683d61ff20d6d94ebf16bd2e4fda63a25e0018bd80fe35678072.scope: Deactivated successfully.
Nov 26 11:39:38 compute-0 podman[94738]: 2025-11-26 11:39:38.753737287 +0000 UTC m=+0.040783681 container create 5b2fce70b27b965cc46607ac6551013b538863bfd764b99fae48b93ce3ddedbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 11:39:38 compute-0 sudo[94494]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:38 compute-0 systemd[1]: Started libpod-conmon-5b2fce70b27b965cc46607ac6551013b538863bfd764b99fae48b93ce3ddedbd.scope.
Nov 26 11:39:38 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/205fe2d3fb2871dcdd8d325075e12026faa20d44c94184b98a1e5d9e4cf2784a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/205fe2d3fb2871dcdd8d325075e12026faa20d44c94184b98a1e5d9e4cf2784a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/205fe2d3fb2871dcdd8d325075e12026faa20d44c94184b98a1e5d9e4cf2784a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/205fe2d3fb2871dcdd8d325075e12026faa20d44c94184b98a1e5d9e4cf2784a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:38 compute-0 podman[94738]: 2025-11-26 11:39:38.810721101 +0000 UTC m=+0.097767506 container init 5b2fce70b27b965cc46607ac6551013b538863bfd764b99fae48b93ce3ddedbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 11:39:38 compute-0 podman[94738]: 2025-11-26 11:39:38.815708909 +0000 UTC m=+0.102755302 container start 5b2fce70b27b965cc46607ac6551013b538863bfd764b99fae48b93ce3ddedbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:38 compute-0 podman[94738]: 2025-11-26 11:39:38.816795844 +0000 UTC m=+0.103842239 container attach 5b2fce70b27b965cc46607ac6551013b538863bfd764b99fae48b93ce3ddedbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Nov 26 11:39:38 compute-0 podman[94738]: 2025-11-26 11:39:38.729061725 +0000 UTC m=+0.016108139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:38 compute-0 sudo[94789]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdekjkgawzlehybqlyzxrkdqbdekivne ; /usr/bin/python3'
Nov 26 11:39:38 compute-0 sudo[94789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:39 compute-0 python3[94791]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:39:39 compute-0 podman[94792]: 2025-11-26 11:39:39.046937334 +0000 UTC m=+0.029540206 container create 0128466b95e8dd73d2e86005daaddf6109e2146e507de9d7b8e6cdef4b285427 (image=quay.io/ceph/ceph:v18, name=frosty_carson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:39 compute-0 systemd[1]: Started libpod-conmon-0128466b95e8dd73d2e86005daaddf6109e2146e507de9d7b8e6cdef4b285427.scope.
Nov 26 11:39:39 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7964a0361924052a9bb6223a0604c03df0c2e6e639339bb6f9e4643582c2d37c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7964a0361924052a9bb6223a0604c03df0c2e6e639339bb6f9e4643582c2d37c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:39 compute-0 podman[94792]: 2025-11-26 11:39:39.104826087 +0000 UTC m=+0.087428948 container init 0128466b95e8dd73d2e86005daaddf6109e2146e507de9d7b8e6cdef4b285427 (image=quay.io/ceph/ceph:v18, name=frosty_carson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 11:39:39 compute-0 podman[94792]: 2025-11-26 11:39:39.116959223 +0000 UTC m=+0.099562084 container start 0128466b95e8dd73d2e86005daaddf6109e2146e507de9d7b8e6cdef4b285427 (image=quay.io/ceph/ceph:v18, name=frosty_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:39 compute-0 podman[94792]: 2025-11-26 11:39:39.117964512 +0000 UTC m=+0.100567394 container attach 0128466b95e8dd73d2e86005daaddf6109e2146e507de9d7b8e6cdef4b285427 (image=quay.io/ceph/ceph:v18, name=frosty_carson, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 11:39:39 compute-0 podman[94792]: 2025-11-26 11:39:39.034677076 +0000 UTC m=+0.017279957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:39:39 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v38: 3 pgs: 1 creating+peering, 1 active+clean, 1 unknown; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]: {
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:     "0": [
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:         {
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "devices": [
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "/dev/loop3"
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             ],
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "lv_name": "ceph_lv0",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "lv_size": "21470642176",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a9ad59a0-aa2e-4d92-b571-519d2d145b6a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "lv_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "name": "ceph_lv0",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "tags": {
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.block_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.cluster_name": "ceph",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.crush_device_class": "",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.encrypted": "0",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.osd_fsid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.osd_id": "0",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.type": "block",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.vdo": "0"
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             },
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "type": "block",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "vg_name": "ceph_vg0"
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:         }
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:     ],
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:     "1": [
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:         {
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "devices": [
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "/dev/loop4"
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             ],
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "lv_name": "ceph_lv1",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "lv_size": "21470642176",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2627095b-eef8-4027-bfef-68bf7cb6801f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "lv_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "name": "ceph_lv1",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "tags": {
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.block_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.cluster_name": "ceph",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.crush_device_class": "",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.encrypted": "0",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.osd_fsid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.osd_id": "1",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.type": "block",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.vdo": "0"
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             },
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "type": "block",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "vg_name": "ceph_vg1"
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:         }
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:     ],
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:     "2": [
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:         {
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "devices": [
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "/dev/loop5"
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             ],
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "lv_name": "ceph_lv2",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "lv_size": "21470642176",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d56156fb-7361-4bef-b06b-1320109b4323,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "lv_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "name": "ceph_lv2",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "tags": {
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.block_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.cluster_name": "ceph",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.crush_device_class": "",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.encrypted": "0",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.osd_fsid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.osd_id": "2",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.type": "block",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:                 "ceph.vdo": "0"
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             },
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "type": "block",
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:             "vg_name": "ceph_vg2"
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:         }
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]:     ]
Nov 26 11:39:39 compute-0 suspicious_leakey[94761]: }
Nov 26 11:39:39 compute-0 systemd[1]: libpod-5b2fce70b27b965cc46607ac6551013b538863bfd764b99fae48b93ce3ddedbd.scope: Deactivated successfully.
Nov 26 11:39:39 compute-0 podman[94738]: 2025-11-26 11:39:39.455814985 +0000 UTC m=+0.742861380 container died 5b2fce70b27b965cc46607ac6551013b538863bfd764b99fae48b93ce3ddedbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-205fe2d3fb2871dcdd8d325075e12026faa20d44c94184b98a1e5d9e4cf2784a-merged.mount: Deactivated successfully.
Nov 26 11:39:39 compute-0 podman[94738]: 2025-11-26 11:39:39.487305104 +0000 UTC m=+0.774351499 container remove 5b2fce70b27b965cc46607ac6551013b538863bfd764b99fae48b93ce3ddedbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_leakey, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 11:39:39 compute-0 systemd[1]: libpod-conmon-5b2fce70b27b965cc46607ac6551013b538863bfd764b99fae48b93ce3ddedbd.scope: Deactivated successfully.
Nov 26 11:39:39 compute-0 sudo[94644]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 26 11:39:39 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1410980083' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 11:39:39 compute-0 sudo[94841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:39 compute-0 sudo[94841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:39 compute-0 sudo[94841]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:39 compute-0 sudo[94869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:39 compute-0 sudo[94869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:39 compute-0 sudo[94869]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:39 compute-0 sudo[94894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:39 compute-0 sudo[94894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:39 compute-0 sudo[94894]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:39 compute-0 sudo[94919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- raw list --format json
Nov 26 11:39:39 compute-0 sudo[94919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 26 11:39:39 compute-0 ceph-mon[74928]: log_channel(cluster) log [WRN] : Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 11:39:39 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1410980083' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 11:39:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Nov 26 11:39:39 compute-0 frosty_carson[94804]: pool 'backups' created
Nov 26 11:39:39 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Nov 26 11:39:39 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/934815048' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 11:39:39 compute-0 ceph-mon[74928]: osdmap e18: 3 total, 3 up, 3 in
Nov 26 11:39:39 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1410980083' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 11:39:39 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 19 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:39 compute-0 systemd[1]: libpod-0128466b95e8dd73d2e86005daaddf6109e2146e507de9d7b8e6cdef4b285427.scope: Deactivated successfully.
Nov 26 11:39:39 compute-0 conmon[94804]: conmon 0128466b95e8dd73d2e8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0128466b95e8dd73d2e86005daaddf6109e2146e507de9d7b8e6cdef4b285427.scope/container/memory.events
Nov 26 11:39:39 compute-0 podman[94792]: 2025-11-26 11:39:39.713243887 +0000 UTC m=+0.695846748 container died 0128466b95e8dd73d2e86005daaddf6109e2146e507de9d7b8e6cdef4b285427 (image=quay.io/ceph/ceph:v18, name=frosty_carson, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 11:39:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-7964a0361924052a9bb6223a0604c03df0c2e6e639339bb6f9e4643582c2d37c-merged.mount: Deactivated successfully.
Nov 26 11:39:39 compute-0 podman[94792]: 2025-11-26 11:39:39.740378587 +0000 UTC m=+0.722981447 container remove 0128466b95e8dd73d2e86005daaddf6109e2146e507de9d7b8e6cdef4b285427 (image=quay.io/ceph/ceph:v18, name=frosty_carson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:39 compute-0 sudo[94789]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:39 compute-0 systemd[1]: libpod-conmon-0128466b95e8dd73d2e86005daaddf6109e2146e507de9d7b8e6cdef4b285427.scope: Deactivated successfully.
Nov 26 11:39:39 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:39 compute-0 sudo[94998]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efshfqydnllkvsgkvvebsqoiubkrflly ; /usr/bin/python3'
Nov 26 11:39:39 compute-0 sudo[94998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:39 compute-0 podman[95012]: 2025-11-26 11:39:39.911552092 +0000 UTC m=+0.025744665 container create 18e743352cd5283c967fda02da00f5641f9392eef4d7f4af8989535e4f7a6291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:39:39 compute-0 systemd[1]: Started libpod-conmon-18e743352cd5283c967fda02da00f5641f9392eef4d7f4af8989535e4f7a6291.scope.
Nov 26 11:39:39 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:39 compute-0 podman[95012]: 2025-11-26 11:39:39.961483125 +0000 UTC m=+0.075675708 container init 18e743352cd5283c967fda02da00f5641f9392eef4d7f4af8989535e4f7a6291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jones, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 11:39:39 compute-0 python3[95002]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:39:39 compute-0 podman[95012]: 2025-11-26 11:39:39.966145902 +0000 UTC m=+0.080338474 container start 18e743352cd5283c967fda02da00f5641f9392eef4d7f4af8989535e4f7a6291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jones, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:39 compute-0 podman[95012]: 2025-11-26 11:39:39.967195908 +0000 UTC m=+0.081388480 container attach 18e743352cd5283c967fda02da00f5641f9392eef4d7f4af8989535e4f7a6291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 11:39:39 compute-0 eager_jones[95026]: 167 167
Nov 26 11:39:39 compute-0 systemd[1]: libpod-18e743352cd5283c967fda02da00f5641f9392eef4d7f4af8989535e4f7a6291.scope: Deactivated successfully.
Nov 26 11:39:39 compute-0 podman[95012]: 2025-11-26 11:39:39.969927825 +0000 UTC m=+0.084120398 container died 18e743352cd5283c967fda02da00f5641f9392eef4d7f4af8989535e4f7a6291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jones, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-af53bee7f684b7da1eec87ca4655b096c238e0cb38e5ba2bdccd6f93542e5e1d-merged.mount: Deactivated successfully.
Nov 26 11:39:39 compute-0 podman[95012]: 2025-11-26 11:39:39.990197832 +0000 UTC m=+0.104390406 container remove 18e743352cd5283c967fda02da00f5641f9392eef4d7f4af8989535e4f7a6291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:39 compute-0 podman[95012]: 2025-11-26 11:39:39.900884756 +0000 UTC m=+0.015077350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:40 compute-0 systemd[1]: libpod-conmon-18e743352cd5283c967fda02da00f5641f9392eef4d7f4af8989535e4f7a6291.scope: Deactivated successfully.
Nov 26 11:39:40 compute-0 podman[95030]: 2025-11-26 11:39:40.006804432 +0000 UTC m=+0.031386272 container create af4fb62797d490c2baedebcd5a93b82ed7d3ffd3e774bf008deff13cf9900bfe (image=quay.io/ceph/ceph:v18, name=modest_cori, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 11:39:40 compute-0 systemd[1]: Started libpod-conmon-af4fb62797d490c2baedebcd5a93b82ed7d3ffd3e774bf008deff13cf9900bfe.scope.
Nov 26 11:39:40 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc7c781b59db4928df979115b8e2b5c887c1f617ac69e8fbb481187f458cddc1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc7c781b59db4928df979115b8e2b5c887c1f617ac69e8fbb481187f458cddc1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:40 compute-0 podman[95030]: 2025-11-26 11:39:40.045406356 +0000 UTC m=+0.069988225 container init af4fb62797d490c2baedebcd5a93b82ed7d3ffd3e774bf008deff13cf9900bfe (image=quay.io/ceph/ceph:v18, name=modest_cori, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:40 compute-0 podman[95030]: 2025-11-26 11:39:40.050724884 +0000 UTC m=+0.075306723 container start af4fb62797d490c2baedebcd5a93b82ed7d3ffd3e774bf008deff13cf9900bfe (image=quay.io/ceph/ceph:v18, name=modest_cori, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:40 compute-0 podman[95030]: 2025-11-26 11:39:40.051845796 +0000 UTC m=+0.076427635 container attach af4fb62797d490c2baedebcd5a93b82ed7d3ffd3e774bf008deff13cf9900bfe (image=quay.io/ceph/ceph:v18, name=modest_cori, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:40 compute-0 podman[95030]: 2025-11-26 11:39:39.992969797 +0000 UTC m=+0.017551656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:39:40 compute-0 podman[95064]: 2025-11-26 11:39:40.100672058 +0000 UTC m=+0.025677287 container create 886723c08e3b4238994f0a2206d3b33040fd716e3a841d607e3fc688d264f9e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:40 compute-0 systemd[1]: Started libpod-conmon-886723c08e3b4238994f0a2206d3b33040fd716e3a841d607e3fc688d264f9e3.scope.
Nov 26 11:39:40 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75737d470230048fa71fc3e7f048c4570555dcc63cc7408434b70b9d0335bb37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75737d470230048fa71fc3e7f048c4570555dcc63cc7408434b70b9d0335bb37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75737d470230048fa71fc3e7f048c4570555dcc63cc7408434b70b9d0335bb37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75737d470230048fa71fc3e7f048c4570555dcc63cc7408434b70b9d0335bb37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:40 compute-0 podman[95064]: 2025-11-26 11:39:40.166488213 +0000 UTC m=+0.091493452 container init 886723c08e3b4238994f0a2206d3b33040fd716e3a841d607e3fc688d264f9e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_gagarin, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 11:39:40 compute-0 podman[95064]: 2025-11-26 11:39:40.171357622 +0000 UTC m=+0.096362852 container start 886723c08e3b4238994f0a2206d3b33040fd716e3a841d607e3fc688d264f9e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_gagarin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:40 compute-0 podman[95064]: 2025-11-26 11:39:40.172448046 +0000 UTC m=+0.097453265 container attach 886723c08e3b4238994f0a2206d3b33040fd716e3a841d607e3fc688d264f9e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 11:39:40 compute-0 podman[95064]: 2025-11-26 11:39:40.090374429 +0000 UTC m=+0.015379687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 26 11:39:40 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/640288940' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 11:39:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 26 11:39:40 compute-0 ceph-mon[74928]: pgmap v38: 3 pgs: 1 creating+peering, 1 active+clean, 1 unknown; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:39:40 compute-0 ceph-mon[74928]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 11:39:40 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1410980083' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 11:39:40 compute-0 ceph-mon[74928]: osdmap e19: 3 total, 3 up, 3 in
Nov 26 11:39:40 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/640288940' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 11:39:40 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/640288940' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 11:39:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Nov 26 11:39:40 compute-0 modest_cori[95055]: pool 'images' created
Nov 26 11:39:40 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Nov 26 11:39:40 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:40 compute-0 systemd[1]: libpod-af4fb62797d490c2baedebcd5a93b82ed7d3ffd3e774bf008deff13cf9900bfe.scope: Deactivated successfully.
Nov 26 11:39:40 compute-0 podman[95107]: 2025-11-26 11:39:40.751624572 +0000 UTC m=+0.020141413 container died af4fb62797d490c2baedebcd5a93b82ed7d3ffd3e774bf008deff13cf9900bfe (image=quay.io/ceph/ceph:v18, name=modest_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 11:39:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc7c781b59db4928df979115b8e2b5c887c1f617ac69e8fbb481187f458cddc1-merged.mount: Deactivated successfully.
Nov 26 11:39:40 compute-0 podman[95107]: 2025-11-26 11:39:40.775227447 +0000 UTC m=+0.043744277 container remove af4fb62797d490c2baedebcd5a93b82ed7d3ffd3e774bf008deff13cf9900bfe (image=quay.io/ceph/ceph:v18, name=modest_cori, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:40 compute-0 systemd[1]: libpod-conmon-af4fb62797d490c2baedebcd5a93b82ed7d3ffd3e774bf008deff13cf9900bfe.scope: Deactivated successfully.
Nov 26 11:39:40 compute-0 sudo[94998]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:40 compute-0 sudo[95160]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akjuckcghspmhifazbjwnypaccwraekj ; /usr/bin/python3'
Nov 26 11:39:40 compute-0 sudo[95160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:40 compute-0 dazzling_gagarin[95077]: {
Nov 26 11:39:40 compute-0 dazzling_gagarin[95077]:     "2627095b-eef8-4027-bfef-68bf7cb6801f": {
Nov 26 11:39:40 compute-0 dazzling_gagarin[95077]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:39:40 compute-0 dazzling_gagarin[95077]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 11:39:40 compute-0 dazzling_gagarin[95077]:         "osd_id": 1,
Nov 26 11:39:40 compute-0 dazzling_gagarin[95077]:         "osd_uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:39:40 compute-0 dazzling_gagarin[95077]:         "type": "bluestore"
Nov 26 11:39:40 compute-0 dazzling_gagarin[95077]:     },
Nov 26 11:39:40 compute-0 dazzling_gagarin[95077]:     "a9ad59a0-aa2e-4d92-b571-519d2d145b6a": {
Nov 26 11:39:40 compute-0 dazzling_gagarin[95077]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:39:40 compute-0 dazzling_gagarin[95077]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 11:39:40 compute-0 dazzling_gagarin[95077]:         "osd_id": 0,
Nov 26 11:39:40 compute-0 dazzling_gagarin[95077]:         "osd_uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:39:40 compute-0 dazzling_gagarin[95077]:         "type": "bluestore"
Nov 26 11:39:40 compute-0 dazzling_gagarin[95077]:     },
Nov 26 11:39:40 compute-0 dazzling_gagarin[95077]:     "d56156fb-7361-4bef-b06b-1320109b4323": {
Nov 26 11:39:40 compute-0 dazzling_gagarin[95077]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:39:40 compute-0 dazzling_gagarin[95077]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 11:39:40 compute-0 dazzling_gagarin[95077]:         "osd_id": 2,
Nov 26 11:39:40 compute-0 dazzling_gagarin[95077]:         "osd_uuid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:39:40 compute-0 dazzling_gagarin[95077]:         "type": "bluestore"
Nov 26 11:39:40 compute-0 dazzling_gagarin[95077]:     }
Nov 26 11:39:40 compute-0 dazzling_gagarin[95077]: }
Nov 26 11:39:40 compute-0 systemd[1]: libpod-886723c08e3b4238994f0a2206d3b33040fd716e3a841d607e3fc688d264f9e3.scope: Deactivated successfully.
Nov 26 11:39:40 compute-0 podman[95064]: 2025-11-26 11:39:40.932310248 +0000 UTC m=+0.857315487 container died 886723c08e3b4238994f0a2206d3b33040fd716e3a841d607e3fc688d264f9e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_gagarin, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-75737d470230048fa71fc3e7f048c4570555dcc63cc7408434b70b9d0335bb37-merged.mount: Deactivated successfully.
Nov 26 11:39:40 compute-0 podman[95064]: 2025-11-26 11:39:40.96216314 +0000 UTC m=+0.887168368 container remove 886723c08e3b4238994f0a2206d3b33040fd716e3a841d607e3fc688d264f9e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_gagarin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:40 compute-0 systemd[1]: libpod-conmon-886723c08e3b4238994f0a2206d3b33040fd716e3a841d607e3fc688d264f9e3.scope: Deactivated successfully.
Nov 26 11:39:40 compute-0 sudo[94919]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:39:40 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:39:40 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:40 compute-0 python3[95165]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:39:41 compute-0 sudo[95182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:41 compute-0 sudo[95182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:41 compute-0 podman[95184]: 2025-11-26 11:39:41.03284625 +0000 UTC m=+0.027193342 container create 118fa8f279fb28ea550b2e77f48d9f00af8eb6a46f99c426e72ffa54c160cf49 (image=quay.io/ceph/ceph:v18, name=priceless_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 11:39:41 compute-0 sudo[95182]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:41 compute-0 systemd[1]: Started libpod-conmon-118fa8f279fb28ea550b2e77f48d9f00af8eb6a46f99c426e72ffa54c160cf49.scope.
Nov 26 11:39:41 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/017ac4196359dbb4929b01757fef7ec2e67a055608474cda882dc2f6da9706bc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/017ac4196359dbb4929b01757fef7ec2e67a055608474cda882dc2f6da9706bc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:41 compute-0 podman[95184]: 2025-11-26 11:39:41.081702269 +0000 UTC m=+0.076049381 container init 118fa8f279fb28ea550b2e77f48d9f00af8eb6a46f99c426e72ffa54c160cf49 (image=quay.io/ceph/ceph:v18, name=priceless_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 11:39:41 compute-0 sudo[95219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:39:41 compute-0 sudo[95219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:41 compute-0 podman[95184]: 2025-11-26 11:39:41.085543506 +0000 UTC m=+0.079890598 container start 118fa8f279fb28ea550b2e77f48d9f00af8eb6a46f99c426e72ffa54c160cf49 (image=quay.io/ceph/ceph:v18, name=priceless_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 11:39:41 compute-0 podman[95184]: 2025-11-26 11:39:41.086481639 +0000 UTC m=+0.080828730 container attach 118fa8f279fb28ea550b2e77f48d9f00af8eb6a46f99c426e72ffa54c160cf49 (image=quay.io/ceph/ceph:v18, name=priceless_beaver, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:41 compute-0 sudo[95219]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:41 compute-0 podman[95184]: 2025-11-26 11:39:41.021517371 +0000 UTC m=+0.015864473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:39:41 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v41: 5 pgs: 2 creating+peering, 1 active+clean, 2 unknown; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:39:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Optimize plan auto_2025-11-26_11:39:41
Nov 26 11:39:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 11:39:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Some PGs (0.400000) are unknown; try again later
Nov 26 11:39:41 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 11:39:41 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:39:41 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 11:39:41 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:39:41 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 11:39:41 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:39:41 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 11:39:41 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:39:41 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 11:39:41 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:39:41 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 11:39:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 11:39:41 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 11:39:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:39:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:39:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 11:39:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 11:39:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:39:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:39:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:39:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:39:41 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:39:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 26 11:39:41 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3929274192' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 11:39:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 26 11:39:41 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 26 11:39:41 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3929274192' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 11:39:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Nov 26 11:39:41 compute-0 priceless_beaver[95227]: pool 'cephfs.cephfs.meta' created
Nov 26 11:39:41 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Nov 26 11:39:41 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/640288940' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 11:39:41 compute-0 ceph-mon[74928]: osdmap e20: 3 total, 3 up, 3 in
Nov 26 11:39:41 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:41 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:41 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 11:39:41 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3929274192' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 11:39:41 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 21 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:41 compute-0 ceph-mgr[75197]: [progress INFO root] update: starting ev 48230f18-fcd2-44a7-9993-bdbf1ebfaaf6 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 26 11:39:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 11:39:41 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 11:39:41 compute-0 systemd[1]: libpod-118fa8f279fb28ea550b2e77f48d9f00af8eb6a46f99c426e72ffa54c160cf49.scope: Deactivated successfully.
Nov 26 11:39:41 compute-0 podman[95184]: 2025-11-26 11:39:41.724913638 +0000 UTC m=+0.719260731 container died 118fa8f279fb28ea550b2e77f48d9f00af8eb6a46f99c426e72ffa54c160cf49 (image=quay.io/ceph/ceph:v18, name=priceless_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-017ac4196359dbb4929b01757fef7ec2e67a055608474cda882dc2f6da9706bc-merged.mount: Deactivated successfully.
Nov 26 11:39:41 compute-0 podman[95184]: 2025-11-26 11:39:41.752540327 +0000 UTC m=+0.746887420 container remove 118fa8f279fb28ea550b2e77f48d9f00af8eb6a46f99c426e72ffa54c160cf49 (image=quay.io/ceph/ceph:v18, name=priceless_beaver, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:41 compute-0 systemd[1]: libpod-conmon-118fa8f279fb28ea550b2e77f48d9f00af8eb6a46f99c426e72ffa54c160cf49.scope: Deactivated successfully.
Nov 26 11:39:41 compute-0 sudo[95160]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:41 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:41 compute-0 sudo[95304]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfryeyujmzjwfmhoitozpwsuodsfagix ; /usr/bin/python3'
Nov 26 11:39:41 compute-0 sudo[95304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:41 compute-0 python3[95306]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:39:41 compute-0 podman[95307]: 2025-11-26 11:39:41.997913872 +0000 UTC m=+0.025536898 container create 1b7b9de579215ef1c5826f1e0ede9fe20c49f2b09260bdcd23f804ac7858b348 (image=quay.io/ceph/ceph:v18, name=suspicious_lederberg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 11:39:42 compute-0 systemd[1]: Started libpod-conmon-1b7b9de579215ef1c5826f1e0ede9fe20c49f2b09260bdcd23f804ac7858b348.scope.
Nov 26 11:39:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/721cbb5c5518a167866e641799a209f693b7f7125cb841dbecfe5c229bc1c070/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/721cbb5c5518a167866e641799a209f693b7f7125cb841dbecfe5c229bc1c070/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:42 compute-0 podman[95307]: 2025-11-26 11:39:42.055089402 +0000 UTC m=+0.082712429 container init 1b7b9de579215ef1c5826f1e0ede9fe20c49f2b09260bdcd23f804ac7858b348 (image=quay.io/ceph/ceph:v18, name=suspicious_lederberg, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 11:39:42 compute-0 podman[95307]: 2025-11-26 11:39:42.059291458 +0000 UTC m=+0.086914485 container start 1b7b9de579215ef1c5826f1e0ede9fe20c49f2b09260bdcd23f804ac7858b348 (image=quay.io/ceph/ceph:v18, name=suspicious_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 11:39:42 compute-0 podman[95307]: 2025-11-26 11:39:42.060391639 +0000 UTC m=+0.088014667 container attach 1b7b9de579215ef1c5826f1e0ede9fe20c49f2b09260bdcd23f804ac7858b348 (image=quay.io/ceph/ceph:v18, name=suspicious_lederberg, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:42 compute-0 podman[95307]: 2025-11-26 11:39:41.987467628 +0000 UTC m=+0.015090676 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:39:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 26 11:39:42 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/538826250' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 11:39:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 26 11:39:42 compute-0 ceph-mon[74928]: pgmap v41: 5 pgs: 2 creating+peering, 1 active+clean, 2 unknown; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:39:42 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 26 11:39:42 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3929274192' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 11:39:42 compute-0 ceph-mon[74928]: osdmap e21: 3 total, 3 up, 3 in
Nov 26 11:39:42 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 11:39:42 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/538826250' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 11:39:42 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 26 11:39:42 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/538826250' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 11:39:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Nov 26 11:39:42 compute-0 suspicious_lederberg[95320]: pool 'cephfs.cephfs.data' created
Nov 26 11:39:42 compute-0 ceph-mgr[75197]: [progress INFO root] update: starting ev 0df45e23-e806-41bf-a626-7981f3755d1c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 26 11:39:42 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Nov 26 11:39:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 11:39:42 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 11:39:42 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:42 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:42 compute-0 systemd[1]: libpod-1b7b9de579215ef1c5826f1e0ede9fe20c49f2b09260bdcd23f804ac7858b348.scope: Deactivated successfully.
Nov 26 11:39:42 compute-0 podman[95307]: 2025-11-26 11:39:42.733575117 +0000 UTC m=+0.761198144 container died 1b7b9de579215ef1c5826f1e0ede9fe20c49f2b09260bdcd23f804ac7858b348 (image=quay.io/ceph/ceph:v18, name=suspicious_lederberg, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-721cbb5c5518a167866e641799a209f693b7f7125cb841dbecfe5c229bc1c070-merged.mount: Deactivated successfully.
Nov 26 11:39:42 compute-0 podman[95307]: 2025-11-26 11:39:42.756380196 +0000 UTC m=+0.784003223 container remove 1b7b9de579215ef1c5826f1e0ede9fe20c49f2b09260bdcd23f804ac7858b348 (image=quay.io/ceph/ceph:v18, name=suspicious_lederberg, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:42 compute-0 systemd[1]: libpod-conmon-1b7b9de579215ef1c5826f1e0ede9fe20c49f2b09260bdcd23f804ac7858b348.scope: Deactivated successfully.
Nov 26 11:39:42 compute-0 sudo[95304]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:42 compute-0 sudo[95379]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eycvzyxlaqswwcmudacycuaoegjodojc ; /usr/bin/python3'
Nov 26 11:39:42 compute-0 sudo[95379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:42 compute-0 python3[95381]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:39:43 compute-0 podman[95382]: 2025-11-26 11:39:43.026109763 +0000 UTC m=+0.025648712 container create 094697861b6eba419ea7b3a6bd1d1c4933ec07b217d430148a2530adc37e092b (image=quay.io/ceph/ceph:v18, name=fervent_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:43 compute-0 systemd[1]: Started libpod-conmon-094697861b6eba419ea7b3a6bd1d1c4933ec07b217d430148a2530adc37e092b.scope.
Nov 26 11:39:43 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9df37d3d17029f469ce7634e75a2634cedb142f5adbf7fea5054a9e88d31f6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9df37d3d17029f469ce7634e75a2634cedb142f5adbf7fea5054a9e88d31f6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:43 compute-0 podman[95382]: 2025-11-26 11:39:43.073412967 +0000 UTC m=+0.072951936 container init 094697861b6eba419ea7b3a6bd1d1c4933ec07b217d430148a2530adc37e092b (image=quay.io/ceph/ceph:v18, name=fervent_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 11:39:43 compute-0 podman[95382]: 2025-11-26 11:39:43.077052288 +0000 UTC m=+0.076591238 container start 094697861b6eba419ea7b3a6bd1d1c4933ec07b217d430148a2530adc37e092b (image=quay.io/ceph/ceph:v18, name=fervent_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 11:39:43 compute-0 podman[95382]: 2025-11-26 11:39:43.078010027 +0000 UTC m=+0.077548976 container attach 094697861b6eba419ea7b3a6bd1d1c4933ec07b217d430148a2530adc37e092b (image=quay.io/ceph/ceph:v18, name=fervent_greider, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:43 compute-0 podman[95382]: 2025-11-26 11:39:43.015301128 +0000 UTC m=+0.014840097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:39:43 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 11:39:43 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v44: 7 pgs: 2 creating+peering, 3 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:39:43 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 11:39:43 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Nov 26 11:39:43 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/299888636' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 26 11:39:43 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 26 11:39:43 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 26 11:39:43 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/538826250' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 11:39:43 compute-0 ceph-mon[74928]: osdmap e22: 3 total, 3 up, 3 in
Nov 26 11:39:43 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 11:39:43 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 11:39:43 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/299888636' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 26 11:39:43 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 26 11:39:43 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 11:39:43 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/299888636' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 26 11:39:43 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Nov 26 11:39:43 compute-0 fervent_greider[95394]: enabled application 'rbd' on pool 'vms'
Nov 26 11:39:43 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Nov 26 11:39:43 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 23 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=23 pruub=10.972815514s) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active pruub 26.387990952s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:43 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 23 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=23 pruub=10.972815514s) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown pruub 26.387990952s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:43 compute-0 ceph-mgr[75197]: [progress INFO root] update: starting ev 2ef50375-4ccd-4672-ae19-10bd8c9594c4 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 26 11:39:43 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 11:39:43 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 11:39:43 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:43 compute-0 systemd[1]: libpod-094697861b6eba419ea7b3a6bd1d1c4933ec07b217d430148a2530adc37e092b.scope: Deactivated successfully.
Nov 26 11:39:43 compute-0 podman[95382]: 2025-11-26 11:39:43.741706948 +0000 UTC m=+0.741245917 container died 094697861b6eba419ea7b3a6bd1d1c4933ec07b217d430148a2530adc37e092b (image=quay.io/ceph/ceph:v18, name=fervent_greider, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b9df37d3d17029f469ce7634e75a2634cedb142f5adbf7fea5054a9e88d31f6-merged.mount: Deactivated successfully.
Nov 26 11:39:43 compute-0 podman[95382]: 2025-11-26 11:39:43.765599123 +0000 UTC m=+0.765138073 container remove 094697861b6eba419ea7b3a6bd1d1c4933ec07b217d430148a2530adc37e092b (image=quay.io/ceph/ceph:v18, name=fervent_greider, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 11:39:43 compute-0 systemd[1]: libpod-conmon-094697861b6eba419ea7b3a6bd1d1c4933ec07b217d430148a2530adc37e092b.scope: Deactivated successfully.
Nov 26 11:39:43 compute-0 sudo[95379]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:43 compute-0 sudo[95452]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tinrqacbemgkmfqaqhbivcndveajhugk ; /usr/bin/python3'
Nov 26 11:39:43 compute-0 sudo[95452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:43 compute-0 python3[95454]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:39:44 compute-0 podman[95455]: 2025-11-26 11:39:44.014514834 +0000 UTC m=+0.027220704 container create ef4a55667074917f20462529dc57a130d046139019f9cc3413daf42369e094fd (image=quay.io/ceph/ceph:v18, name=determined_driscoll, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 11:39:44 compute-0 systemd[1]: Started libpod-conmon-ef4a55667074917f20462529dc57a130d046139019f9cc3413daf42369e094fd.scope.
Nov 26 11:39:44 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46b3e40c40558653162ea6b92c972231be227af942d7a22e8e303e5ec9ede118/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46b3e40c40558653162ea6b92c972231be227af942d7a22e8e303e5ec9ede118/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:44 compute-0 podman[95455]: 2025-11-26 11:39:44.066899152 +0000 UTC m=+0.079605023 container init ef4a55667074917f20462529dc57a130d046139019f9cc3413daf42369e094fd (image=quay.io/ceph/ceph:v18, name=determined_driscoll, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:44 compute-0 podman[95455]: 2025-11-26 11:39:44.071228451 +0000 UTC m=+0.083934322 container start ef4a55667074917f20462529dc57a130d046139019f9cc3413daf42369e094fd (image=quay.io/ceph/ceph:v18, name=determined_driscoll, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 11:39:44 compute-0 podman[95455]: 2025-11-26 11:39:44.0724006 +0000 UTC m=+0.085106470 container attach ef4a55667074917f20462529dc57a130d046139019f9cc3413daf42369e094fd (image=quay.io/ceph/ceph:v18, name=determined_driscoll, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 11:39:44 compute-0 podman[95455]: 2025-11-26 11:39:44.00302156 +0000 UTC m=+0.015727451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:39:44 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Nov 26 11:39:44 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2789707177' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 26 11:39:44 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 26 11:39:44 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 26 11:39:44 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2789707177' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 26 11:39:44 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Nov 26 11:39:44 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Nov 26 11:39:44 compute-0 determined_driscoll[95467]: enabled application 'rbd' on pool 'volumes'
Nov 26 11:39:44 compute-0 ceph-mgr[75197]: [progress INFO root] update: starting ev 9f770903-6ed5-40d6-8f53-1a3c08f4d1ba (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 26 11:39:44 compute-0 ceph-mon[74928]: pgmap v44: 7 pgs: 2 creating+peering, 3 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:39:44 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 26 11:39:44 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 11:39:44 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/299888636' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 26 11:39:44 compute-0 ceph-mon[74928]: osdmap e23: 3 total, 3 up, 3 in
Nov 26 11:39:44 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 11:39:44 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2789707177' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 26 11:39:44 compute-0 ceph-mgr[75197]: [progress INFO root] complete: finished ev 48230f18-fcd2-44a7-9993-bdbf1ebfaaf6 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 26 11:39:44 compute-0 ceph-mgr[75197]: [progress INFO root] Completed event 48230f18-fcd2-44a7-9993-bdbf1ebfaaf6 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 3 seconds
Nov 26 11:39:44 compute-0 ceph-mgr[75197]: [progress INFO root] complete: finished ev 0df45e23-e806-41bf-a626-7981f3755d1c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 26 11:39:44 compute-0 ceph-mgr[75197]: [progress INFO root] Completed event 0df45e23-e806-41bf-a626-7981f3755d1c (PG autoscaler increasing pool 3 PGs from 1 to 32) in 2 seconds
Nov 26 11:39:44 compute-0 ceph-mgr[75197]: [progress INFO root] complete: finished ev 2ef50375-4ccd-4672-ae19-10bd8c9594c4 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 26 11:39:44 compute-0 ceph-mgr[75197]: [progress INFO root] Completed event 2ef50375-4ccd-4672-ae19-10bd8c9594c4 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 1 seconds
Nov 26 11:39:44 compute-0 ceph-mgr[75197]: [progress INFO root] complete: finished ev 9f770903-6ed5-40d6-8f53-1a3c08f4d1ba (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 26 11:39:44 compute-0 ceph-mgr[75197]: [progress INFO root] Completed event 9f770903-6ed5-40d6-8f53-1a3c08f4d1ba (PG autoscaler increasing pool 5 PGs from 1 to 32) in 0 seconds
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.1f( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.1c( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.1e( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.b( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.a( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.8( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.6( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.5( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.4( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.3( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.2( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.1( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.7( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.c( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.1d( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.d( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.e( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.f( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.10( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.11( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.12( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.13( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.14( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.15( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.16( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.17( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.18( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.19( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.1a( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.1b( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.1f( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.9( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.1e( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.6( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.4( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.3( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.2( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.1( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.e( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.8( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.7( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.c( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.10( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.11( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.14( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.16( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.17( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.0( empty local-lis/les=23/24 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.1a( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.9( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.19( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 24 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:44 compute-0 systemd[1]: libpod-ef4a55667074917f20462529dc57a130d046139019f9cc3413daf42369e094fd.scope: Deactivated successfully.
Nov 26 11:39:44 compute-0 podman[95455]: 2025-11-26 11:39:44.745874753 +0000 UTC m=+0.758580623 container died ef4a55667074917f20462529dc57a130d046139019f9cc3413daf42369e094fd (image=quay.io/ceph/ceph:v18, name=determined_driscoll, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 11:39:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-46b3e40c40558653162ea6b92c972231be227af942d7a22e8e303e5ec9ede118-merged.mount: Deactivated successfully.
Nov 26 11:39:44 compute-0 podman[95455]: 2025-11-26 11:39:44.766278736 +0000 UTC m=+0.778984607 container remove ef4a55667074917f20462529dc57a130d046139019f9cc3413daf42369e094fd (image=quay.io/ceph/ceph:v18, name=determined_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:44 compute-0 systemd[1]: libpod-conmon-ef4a55667074917f20462529dc57a130d046139019f9cc3413daf42369e094fd.scope: Deactivated successfully.
Nov 26 11:39:44 compute-0 sudo[95452]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:44 compute-0 sudo[95525]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyufbfkhrjpeeseniumrqpznigwqevie ; /usr/bin/python3'
Nov 26 11:39:44 compute-0 sudo[95525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:45 compute-0 python3[95527]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:39:45 compute-0 podman[95528]: 2025-11-26 11:39:45.050419378 +0000 UTC m=+0.027830338 container create 1469bcc1969bb176f093c59fb5fefc76ca0d3077517125a4593a41e50fcff74d (image=quay.io/ceph/ceph:v18, name=determined_margulis, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 11:39:45 compute-0 systemd[1]: Started libpod-conmon-1469bcc1969bb176f093c59fb5fefc76ca0d3077517125a4593a41e50fcff74d.scope.
Nov 26 11:39:45 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eb495314c91132ff984a144429d7c26feb1352e792a7a08d0119b55aea3f42a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eb495314c91132ff984a144429d7c26feb1352e792a7a08d0119b55aea3f42a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:45 compute-0 podman[95528]: 2025-11-26 11:39:45.10669562 +0000 UTC m=+0.084106582 container init 1469bcc1969bb176f093c59fb5fefc76ca0d3077517125a4593a41e50fcff74d (image=quay.io/ceph/ceph:v18, name=determined_margulis, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:39:45 compute-0 podman[95528]: 2025-11-26 11:39:45.110911363 +0000 UTC m=+0.088322323 container start 1469bcc1969bb176f093c59fb5fefc76ca0d3077517125a4593a41e50fcff74d (image=quay.io/ceph/ceph:v18, name=determined_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 11:39:45 compute-0 podman[95528]: 2025-11-26 11:39:45.11217143 +0000 UTC m=+0.089582391 container attach 1469bcc1969bb176f093c59fb5fefc76ca0d3077517125a4593a41e50fcff74d (image=quay.io/ceph/ceph:v18, name=determined_margulis, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:45 compute-0 podman[95528]: 2025-11-26 11:39:45.038570175 +0000 UTC m=+0.015981146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:39:45 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v47: 38 pgs: 1 creating+peering, 5 active+clean, 32 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:39:45 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 11:39:45 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 11:39:45 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 11:39:45 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 11:39:45 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Nov 26 11:39:45 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1594463623' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 26 11:39:45 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 26 11:39:45 compute-0 ceph-mon[74928]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 11:39:45 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 11:39:45 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 11:39:45 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1594463623' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 26 11:39:45 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Nov 26 11:39:45 compute-0 determined_margulis[95541]: enabled application 'rbd' on pool 'backups'
Nov 26 11:39:45 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Nov 26 11:39:45 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 26 11:39:45 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2789707177' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 26 11:39:45 compute-0 ceph-mon[74928]: osdmap e24: 3 total, 3 up, 3 in
Nov 26 11:39:45 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 11:39:45 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 11:39:45 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1594463623' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 26 11:39:45 compute-0 systemd[1]: libpod-1469bcc1969bb176f093c59fb5fefc76ca0d3077517125a4593a41e50fcff74d.scope: Deactivated successfully.
Nov 26 11:39:45 compute-0 conmon[95541]: conmon 1469bcc1969bb176f093 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1469bcc1969bb176f093c59fb5fefc76ca0d3077517125a4593a41e50fcff74d.scope/container/memory.events
Nov 26 11:39:45 compute-0 podman[95566]: 2025-11-26 11:39:45.795926921 +0000 UTC m=+0.016523473 container died 1469bcc1969bb176f093c59fb5fefc76ca0d3077517125a4593a41e50fcff74d (image=quay.io/ceph/ceph:v18, name=determined_margulis, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 11:39:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-2eb495314c91132ff984a144429d7c26feb1352e792a7a08d0119b55aea3f42a-merged.mount: Deactivated successfully.
Nov 26 11:39:45 compute-0 podman[95566]: 2025-11-26 11:39:45.818231985 +0000 UTC m=+0.038828517 container remove 1469bcc1969bb176f093c59fb5fefc76ca0d3077517125a4593a41e50fcff74d (image=quay.io/ceph/ceph:v18, name=determined_margulis, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:45 compute-0 systemd[1]: libpod-conmon-1469bcc1969bb176f093c59fb5fefc76ca0d3077517125a4593a41e50fcff74d.scope: Deactivated successfully.
Nov 26 11:39:45 compute-0 sudo[95525]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:45 compute-0 sudo[95601]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vptdslsleywirrnqywnqebftigsnuapm ; /usr/bin/python3'
Nov 26 11:39:45 compute-0 sudo[95601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:46 compute-0 python3[95603]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:39:46 compute-0 podman[95604]: 2025-11-26 11:39:46.085545907 +0000 UTC m=+0.027497183 container create ee897e0417d04c8a0696ca9466d6e3adbcecb7b0fa3d1980de1a9f2a24ecba2a (image=quay.io/ceph/ceph:v18, name=funny_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 11:39:46 compute-0 systemd[1]: Started libpod-conmon-ee897e0417d04c8a0696ca9466d6e3adbcecb7b0fa3d1980de1a9f2a24ecba2a.scope.
Nov 26 11:39:46 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c36957fee06b1748a1a57ac8f5139f0b2e5509e13062309f04588c8c83fc7f0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c36957fee06b1748a1a57ac8f5139f0b2e5509e13062309f04588c8c83fc7f0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:46 compute-0 podman[95604]: 2025-11-26 11:39:46.140704756 +0000 UTC m=+0.082656041 container init ee897e0417d04c8a0696ca9466d6e3adbcecb7b0fa3d1980de1a9f2a24ecba2a (image=quay.io/ceph/ceph:v18, name=funny_spence, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:46 compute-0 podman[95604]: 2025-11-26 11:39:46.14570181 +0000 UTC m=+0.087653086 container start ee897e0417d04c8a0696ca9466d6e3adbcecb7b0fa3d1980de1a9f2a24ecba2a (image=quay.io/ceph/ceph:v18, name=funny_spence, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:46 compute-0 podman[95604]: 2025-11-26 11:39:46.146935407 +0000 UTC m=+0.088886673 container attach ee897e0417d04c8a0696ca9466d6e3adbcecb7b0fa3d1980de1a9f2a24ecba2a (image=quay.io/ceph/ceph:v18, name=funny_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 11:39:46 compute-0 podman[95604]: 2025-11-26 11:39:46.073791265 +0000 UTC m=+0.015742551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 25 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=25 pruub=9.506929398s) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active pruub 31.219617844s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 25 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=25 pruub=9.506929398s) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown pruub 31.219617844s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-mgr[75197]: [progress INFO root] Writing back 7 completed events
Nov 26 11:39:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 26 11:39:46 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:39:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Nov 26 11:39:46 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3223601471' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 26 11:39:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 26 11:39:46 compute-0 ceph-mon[74928]: pgmap v47: 38 pgs: 1 creating+peering, 5 active+clean, 32 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:39:46 compute-0 ceph-mon[74928]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 11:39:46 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 11:39:46 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 11:39:46 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1594463623' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 26 11:39:46 compute-0 ceph-mon[74928]: osdmap e25: 3 total, 3 up, 3 in
Nov 26 11:39:46 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:46 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3223601471' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 26 11:39:46 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3223601471' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 26 11:39:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Nov 26 11:39:46 compute-0 funny_spence[95617]: enabled application 'rbd' on pool 'images'
Nov 26 11:39:46 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.1e( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.1d( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.1c( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.1b( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.1f( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.a( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.9( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.8( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.7( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.6( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.5( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.3( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.1( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.4( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.2( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.b( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.c( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.d( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.e( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.f( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.10( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.12( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.13( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.14( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.15( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.16( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.17( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.18( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.19( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.1a( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.11( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.1d( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.1e( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.1b( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.1f( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.9( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.a( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.8( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.6( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.7( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.3( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.1( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.5( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.1c( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.0( empty local-lis/les=25/26 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.2( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.b( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.c( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.d( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.e( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.f( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.4( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.12( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.13( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.10( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.14( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.15( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.17( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.16( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.18( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.19( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.11( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 26 pg[3.1a( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:46 compute-0 systemd[1]: libpod-ee897e0417d04c8a0696ca9466d6e3adbcecb7b0fa3d1980de1a9f2a24ecba2a.scope: Deactivated successfully.
Nov 26 11:39:46 compute-0 podman[95604]: 2025-11-26 11:39:46.778896577 +0000 UTC m=+0.720847863 container died ee897e0417d04c8a0696ca9466d6e3adbcecb7b0fa3d1980de1a9f2a24ecba2a (image=quay.io/ceph/ceph:v18, name=funny_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c36957fee06b1748a1a57ac8f5139f0b2e5509e13062309f04588c8c83fc7f0-merged.mount: Deactivated successfully.
Nov 26 11:39:46 compute-0 podman[95604]: 2025-11-26 11:39:46.800266565 +0000 UTC m=+0.742217841 container remove ee897e0417d04c8a0696ca9466d6e3adbcecb7b0fa3d1980de1a9f2a24ecba2a (image=quay.io/ceph/ceph:v18, name=funny_spence, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 11:39:46 compute-0 systemd[1]: libpod-conmon-ee897e0417d04c8a0696ca9466d6e3adbcecb7b0fa3d1980de1a9f2a24ecba2a.scope: Deactivated successfully.
Nov 26 11:39:46 compute-0 sudo[95601]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:46 compute-0 sudo[95675]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmpqfzirastxmhzvekbpknydkbiqgfcq ; /usr/bin/python3'
Nov 26 11:39:46 compute-0 sudo[95675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 25 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=25 pruub=10.802595139s) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active pruub 29.400737762s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=25 pruub=10.802595139s) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown pruub 29.400737762s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.18( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.19( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.1a( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.1b( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.1c( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.1d( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.1f( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.1e( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.8( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.9( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.14( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.15( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.16( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.17( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.c( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.d( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.e( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.f( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.13( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.12( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.10( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.a( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.b( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.11( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.1( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.3( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.6( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.7( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.4( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.5( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 26 pg[5.2( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:47 compute-0 python3[95677]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:39:47 compute-0 podman[95678]: 2025-11-26 11:39:47.062584275 +0000 UTC m=+0.029195749 container create a3cd27ba49c22015d16bcca8174900d3324dce6fa9e0f276d8ba4cb8d36bebec (image=quay.io/ceph/ceph:v18, name=condescending_cohen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:47 compute-0 systemd[1]: Started libpod-conmon-a3cd27ba49c22015d16bcca8174900d3324dce6fa9e0f276d8ba4cb8d36bebec.scope.
Nov 26 11:39:47 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15832155b2c8a13cc85d2f3568f8bf7cec759138b02e6b35ed2f71da58464d73/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15832155b2c8a13cc85d2f3568f8bf7cec759138b02e6b35ed2f71da58464d73/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:47 compute-0 podman[95678]: 2025-11-26 11:39:47.105164156 +0000 UTC m=+0.071775650 container init a3cd27ba49c22015d16bcca8174900d3324dce6fa9e0f276d8ba4cb8d36bebec (image=quay.io/ceph/ceph:v18, name=condescending_cohen, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 11:39:47 compute-0 podman[95678]: 2025-11-26 11:39:47.108819658 +0000 UTC m=+0.075431131 container start a3cd27ba49c22015d16bcca8174900d3324dce6fa9e0f276d8ba4cb8d36bebec (image=quay.io/ceph/ceph:v18, name=condescending_cohen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:47 compute-0 podman[95678]: 2025-11-26 11:39:47.109834907 +0000 UTC m=+0.076446371 container attach a3cd27ba49c22015d16bcca8174900d3324dce6fa9e0f276d8ba4cb8d36bebec (image=quay.io/ceph/ceph:v18, name=condescending_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 11:39:47 compute-0 podman[95678]: 2025-11-26 11:39:47.051189368 +0000 UTC m=+0.017800852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:39:47 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v50: 100 pgs: 7 active+clean, 93 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:39:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 11:39:47 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 11:39:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Nov 26 11:39:47 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/816091612' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 26 11:39:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 26 11:39:47 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 11:39:47 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/816091612' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 26 11:39:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Nov 26 11:39:47 compute-0 condescending_cohen[95690]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 26 11:39:47 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3223601471' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 26 11:39:47 compute-0 ceph-mon[74928]: osdmap e26: 3 total, 3 up, 3 in
Nov 26 11:39:47 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 11:39:47 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/816091612' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 26 11:39:47 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.1d( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.1c( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.1f( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.1e( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.10( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.12( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.14( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.13( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.17( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.8( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.15( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.16( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.b( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.11( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.7( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.9( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.6( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.0( empty local-lis/les=25/27 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.5( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.4( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.3( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.e( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.1( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.d( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.2( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.1a( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.19( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.1b( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.18( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.c( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.f( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 27 pg[5.a( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:47 compute-0 systemd[1]: libpod-a3cd27ba49c22015d16bcca8174900d3324dce6fa9e0f276d8ba4cb8d36bebec.scope: Deactivated successfully.
Nov 26 11:39:47 compute-0 conmon[95690]: conmon a3cd27ba49c22015d16b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a3cd27ba49c22015d16bcca8174900d3324dce6fa9e0f276d8ba4cb8d36bebec.scope/container/memory.events
Nov 26 11:39:47 compute-0 podman[95678]: 2025-11-26 11:39:47.790178692 +0000 UTC m=+0.756790167 container died a3cd27ba49c22015d16bcca8174900d3324dce6fa9e0f276d8ba4cb8d36bebec (image=quay.io/ceph/ceph:v18, name=condescending_cohen, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-15832155b2c8a13cc85d2f3568f8bf7cec759138b02e6b35ed2f71da58464d73-merged.mount: Deactivated successfully.
Nov 26 11:39:47 compute-0 podman[95678]: 2025-11-26 11:39:47.811287943 +0000 UTC m=+0.777899417 container remove a3cd27ba49c22015d16bcca8174900d3324dce6fa9e0f276d8ba4cb8d36bebec (image=quay.io/ceph/ceph:v18, name=condescending_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 11:39:47 compute-0 systemd[1]: libpod-conmon-a3cd27ba49c22015d16bcca8174900d3324dce6fa9e0f276d8ba4cb8d36bebec.scope: Deactivated successfully.
Nov 26 11:39:47 compute-0 sudo[95675]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:47 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 27 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=8.815625191s) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active pruub 35.713508606s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:47 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 27 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=8.815625191s) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown pruub 35.713508606s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:47 compute-0 sudo[95748]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxcwcjhzugubvxismrmckktdqpyfjfnc ; /usr/bin/python3'
Nov 26 11:39:47 compute-0 sudo[95748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:48 compute-0 python3[95750]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:39:48 compute-0 podman[95751]: 2025-11-26 11:39:48.073568631 +0000 UTC m=+0.027374218 container create 01e529e672fc8165d1a7da5913791dbe23bf5edca72a6dfc6e0fc6b3b684b09a (image=quay.io/ceph/ceph:v18, name=kind_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 11:39:48 compute-0 systemd[1]: Started libpod-conmon-01e529e672fc8165d1a7da5913791dbe23bf5edca72a6dfc6e0fc6b3b684b09a.scope.
Nov 26 11:39:48 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d680a991fc7424c5c974e1722d84cf335d9b64d33abede3b142fe9164f1f90a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d680a991fc7424c5c974e1722d84cf335d9b64d33abede3b142fe9164f1f90a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:48 compute-0 podman[95751]: 2025-11-26 11:39:48.121544388 +0000 UTC m=+0.075349995 container init 01e529e672fc8165d1a7da5913791dbe23bf5edca72a6dfc6e0fc6b3b684b09a (image=quay.io/ceph/ceph:v18, name=kind_shamir, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:48 compute-0 podman[95751]: 2025-11-26 11:39:48.127824324 +0000 UTC m=+0.081629911 container start 01e529e672fc8165d1a7da5913791dbe23bf5edca72a6dfc6e0fc6b3b684b09a (image=quay.io/ceph/ceph:v18, name=kind_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 26 11:39:48 compute-0 podman[95751]: 2025-11-26 11:39:48.128862757 +0000 UTC m=+0.082668345 container attach 01e529e672fc8165d1a7da5913791dbe23bf5edca72a6dfc6e0fc6b3b684b09a (image=quay.io/ceph/ceph:v18, name=kind_shamir, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 11:39:48 compute-0 podman[95751]: 2025-11-26 11:39:48.061958805 +0000 UTC m=+0.015764412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:39:48 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Nov 26 11:39:48 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3751016468' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 26 11:39:48 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Nov 26 11:39:48 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Nov 26 11:39:48 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 26 11:39:48 compute-0 ceph-mon[74928]: pgmap v50: 100 pgs: 7 active+clean, 93 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:39:48 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 11:39:48 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/816091612' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 26 11:39:48 compute-0 ceph-mon[74928]: osdmap e27: 3 total, 3 up, 3 in
Nov 26 11:39:48 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3751016468' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 26 11:39:48 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3751016468' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 26 11:39:48 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Nov 26 11:39:48 compute-0 kind_shamir[95763]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 26 11:39:48 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.1e( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.1d( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.1f( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.1c( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.8( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.7( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.b( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.6( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.a( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.5( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.1a( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.9( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.4( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.19( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.3( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.1( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.2( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.c( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.1b( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.d( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.e( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.f( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.11( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.10( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.12( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.13( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.14( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.15( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.16( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.17( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.18( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.1e( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.1d( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.1c( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.8( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.7( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.b( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.a( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.5( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.1a( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.9( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.4( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.1f( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.19( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.6( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.3( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.2( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.0( empty local-lis/les=27/28 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.1b( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.e( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.f( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.11( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.d( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.10( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.12( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.13( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.15( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.14( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.17( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.16( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.1( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.18( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 28 pg[4.c( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:48 compute-0 systemd[1]: libpod-01e529e672fc8165d1a7da5913791dbe23bf5edca72a6dfc6e0fc6b3b684b09a.scope: Deactivated successfully.
Nov 26 11:39:48 compute-0 podman[95751]: 2025-11-26 11:39:48.789509418 +0000 UTC m=+0.743315025 container died 01e529e672fc8165d1a7da5913791dbe23bf5edca72a6dfc6e0fc6b3b684b09a (image=quay.io/ceph/ceph:v18, name=kind_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 11:39:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d680a991fc7424c5c974e1722d84cf335d9b64d33abede3b142fe9164f1f90a-merged.mount: Deactivated successfully.
Nov 26 11:39:48 compute-0 podman[95751]: 2025-11-26 11:39:48.810460216 +0000 UTC m=+0.764265803 container remove 01e529e672fc8165d1a7da5913791dbe23bf5edca72a6dfc6e0fc6b3b684b09a (image=quay.io/ceph/ceph:v18, name=kind_shamir, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 11:39:48 compute-0 systemd[1]: libpod-conmon-01e529e672fc8165d1a7da5913791dbe23bf5edca72a6dfc6e0fc6b3b684b09a.scope: Deactivated successfully.
Nov 26 11:39:48 compute-0 sudo[95748]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:49 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v53: 131 pgs: 100 active+clean, 31 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:39:49 compute-0 python3[95873]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:39:49 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Nov 26 11:39:49 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 26 11:39:49 compute-0 ceph-mon[74928]: 2.1 scrub starts
Nov 26 11:39:49 compute-0 ceph-mon[74928]: 2.1 scrub ok
Nov 26 11:39:49 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3751016468' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 26 11:39:49 compute-0 ceph-mon[74928]: osdmap e28: 3 total, 3 up, 3 in
Nov 26 11:39:49 compute-0 ceph-mon[74928]: pgmap v53: 131 pgs: 100 active+clean, 31 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:39:49 compute-0 python3[95944]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764157189.370294-36974-83066129990635/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:39:50 compute-0 sudo[96044]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilzimxwjiwokptzmzukctowteakxamru ; /usr/bin/python3'
Nov 26 11:39:50 compute-0 sudo[96044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:50 compute-0 python3[96046]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:39:50 compute-0 sudo[96044]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:50 compute-0 sudo[96119]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iemysopbkjwxeynrbrqerudhtmjzfjmo ; /usr/bin/python3'
Nov 26 11:39:50 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Nov 26 11:39:50 compute-0 sudo[96119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:50 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Nov 26 11:39:50 compute-0 python3[96121]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764157190.0761588-36988-116010418627928/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=b9233032cae0409b2b9784e302f2cb5590747ecd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:39:50 compute-0 sudo[96119]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:50 compute-0 sudo[96169]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjsrjquovgtsxaadspmdctubeochotty ; /usr/bin/python3'
Nov 26 11:39:50 compute-0 sudo[96169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:50 compute-0 ceph-mon[74928]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Nov 26 11:39:50 compute-0 ceph-mon[74928]: Cluster is now healthy
Nov 26 11:39:50 compute-0 ceph-mon[74928]: 3.1 scrub starts
Nov 26 11:39:50 compute-0 ceph-mon[74928]: 3.1 scrub ok
Nov 26 11:39:50 compute-0 python3[96171]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:39:50 compute-0 podman[96172]: 2025-11-26 11:39:50.875367496 +0000 UTC m=+0.026799790 container create 7326c0ba5e0cea4968fe5a2012a612c0ac68f2fc04f3600353ac962fb6235441 (image=quay.io/ceph/ceph:v18, name=gallant_euclid, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:50 compute-0 systemd[1]: Started libpod-conmon-7326c0ba5e0cea4968fe5a2012a612c0ac68f2fc04f3600353ac962fb6235441.scope.
Nov 26 11:39:50 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b26d5ccd8d46ade7c399e82142880ce6b798997b5fc01aa27e19eac029dfe0e1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b26d5ccd8d46ade7c399e82142880ce6b798997b5fc01aa27e19eac029dfe0e1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b26d5ccd8d46ade7c399e82142880ce6b798997b5fc01aa27e19eac029dfe0e1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:50 compute-0 podman[96172]: 2025-11-26 11:39:50.924156678 +0000 UTC m=+0.075588991 container init 7326c0ba5e0cea4968fe5a2012a612c0ac68f2fc04f3600353ac962fb6235441 (image=quay.io/ceph/ceph:v18, name=gallant_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:39:50 compute-0 podman[96172]: 2025-11-26 11:39:50.927815987 +0000 UTC m=+0.079248280 container start 7326c0ba5e0cea4968fe5a2012a612c0ac68f2fc04f3600353ac962fb6235441 (image=quay.io/ceph/ceph:v18, name=gallant_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Nov 26 11:39:50 compute-0 podman[96172]: 2025-11-26 11:39:50.928876012 +0000 UTC m=+0.080308306 container attach 7326c0ba5e0cea4968fe5a2012a612c0ac68f2fc04f3600353ac962fb6235441 (image=quay.io/ceph/ceph:v18, name=gallant_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:50 compute-0 podman[96172]: 2025-11-26 11:39:50.864547749 +0000 UTC m=+0.015980063 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:39:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 26 11:39:51 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/116847655' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 26 11:39:51 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/116847655' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 26 11:39:51 compute-0 gallant_euclid[96184]: 
Nov 26 11:39:51 compute-0 gallant_euclid[96184]: [global]
Nov 26 11:39:51 compute-0 gallant_euclid[96184]:         fsid = ebab460c-3fd7-5f66-aa87-e10c143123f7
Nov 26 11:39:51 compute-0 gallant_euclid[96184]:         mon_host = 192.168.122.100
Nov 26 11:39:51 compute-0 systemd[1]: libpod-7326c0ba5e0cea4968fe5a2012a612c0ac68f2fc04f3600353ac962fb6235441.scope: Deactivated successfully.
Nov 26 11:39:51 compute-0 podman[96172]: 2025-11-26 11:39:51.367945032 +0000 UTC m=+0.519377325 container died 7326c0ba5e0cea4968fe5a2012a612c0ac68f2fc04f3600353ac962fb6235441 (image=quay.io/ceph/ceph:v18, name=gallant_euclid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:51 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v54: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:39:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 11:39:51 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 11:39:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 11:39:51 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 11:39:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 11:39:51 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 11:39:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 11:39:51 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 11:39:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-b26d5ccd8d46ade7c399e82142880ce6b798997b5fc01aa27e19eac029dfe0e1-merged.mount: Deactivated successfully.
Nov 26 11:39:51 compute-0 podman[96172]: 2025-11-26 11:39:51.397294801 +0000 UTC m=+0.548727095 container remove 7326c0ba5e0cea4968fe5a2012a612c0ac68f2fc04f3600353ac962fb6235441 (image=quay.io/ceph/ceph:v18, name=gallant_euclid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 11:39:51 compute-0 systemd[1]: libpod-conmon-7326c0ba5e0cea4968fe5a2012a612c0ac68f2fc04f3600353ac962fb6235441.scope: Deactivated successfully.
Nov 26 11:39:51 compute-0 sudo[96209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:51 compute-0 sudo[96209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:51 compute-0 sudo[96209]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:51 compute-0 sudo[96169]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:51 compute-0 sudo[96244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:51 compute-0 sudo[96244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:51 compute-0 sudo[96244]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:51 compute-0 sudo[96269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:51 compute-0 sudo[96269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:51 compute-0 sudo[96269]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:39:51 compute-0 sudo[96294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 26 11:39:51 compute-0 sudo[96294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:51 compute-0 sudo[96340]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nosfjlnifkthtrrgqyefjouoqexnwypp ; /usr/bin/python3'
Nov 26 11:39:51 compute-0 sudo[96340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:51 compute-0 python3[96344]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:39:51 compute-0 podman[96355]: 2025-11-26 11:39:51.693652164 +0000 UTC m=+0.030529641 container create 944ec68d0e72e44076062b0039f36a4832cb965609eee78453d6a91d0bce7b8f (image=quay.io/ceph/ceph:v18, name=cool_kepler, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 11:39:51 compute-0 systemd[1]: Started libpod-conmon-944ec68d0e72e44076062b0039f36a4832cb965609eee78453d6a91d0bce7b8f.scope.
Nov 26 11:39:51 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c21836e17d36bc55829f219cba04b4435b46ebbce7456d2a627cbab91decb54f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c21836e17d36bc55829f219cba04b4435b46ebbce7456d2a627cbab91decb54f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c21836e17d36bc55829f219cba04b4435b46ebbce7456d2a627cbab91decb54f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:51 compute-0 podman[96355]: 2025-11-26 11:39:51.747887333 +0000 UTC m=+0.084764830 container init 944ec68d0e72e44076062b0039f36a4832cb965609eee78453d6a91d0bce7b8f (image=quay.io/ceph/ceph:v18, name=cool_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:51 compute-0 podman[96355]: 2025-11-26 11:39:51.75204262 +0000 UTC m=+0.088920097 container start 944ec68d0e72e44076062b0039f36a4832cb965609eee78453d6a91d0bce7b8f (image=quay.io/ceph/ceph:v18, name=cool_kepler, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:51 compute-0 podman[96355]: 2025-11-26 11:39:51.753290886 +0000 UTC m=+0.090168362 container attach 944ec68d0e72e44076062b0039f36a4832cb965609eee78453d6a91d0bce7b8f (image=quay.io/ceph/ceph:v18, name=cool_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 11:39:51 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/116847655' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 26 11:39:51 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/116847655' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 26 11:39:51 compute-0 ceph-mon[74928]: pgmap v54: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:39:51 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 11:39:51 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 11:39:51 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 11:39:51 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 11:39:51 compute-0 podman[96355]: 2025-11-26 11:39:51.680013366 +0000 UTC m=+0.016890864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:39:51 compute-0 podman[96417]: 2025-11-26 11:39:51.89567208 +0000 UTC m=+0.039252987 container exec 810eaed6cbde00330c0646cedaef5c2ae94579236d67ba42b8ef02d055d04ad5 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 11:39:51 compute-0 podman[96417]: 2025-11-26 11:39:51.970881182 +0000 UTC m=+0.114462069 container exec_died 810eaed6cbde00330c0646cedaef5c2ae94579236d67ba42b8ef02d055d04ad5 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 11:39:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Nov 26 11:39:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3133095100' entity='client.admin' 
Nov 26 11:39:52 compute-0 cool_kepler[96387]: set ssl_option
Nov 26 11:39:52 compute-0 sudo[96294]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:39:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:39:52 compute-0 systemd[1]: libpod-944ec68d0e72e44076062b0039f36a4832cb965609eee78453d6a91d0bce7b8f.scope: Deactivated successfully.
Nov 26 11:39:52 compute-0 podman[96355]: 2025-11-26 11:39:52.279778758 +0000 UTC m=+0.616656235 container died 944ec68d0e72e44076062b0039f36a4832cb965609eee78453d6a91d0bce7b8f (image=quay.io/ceph/ceph:v18, name=cool_kepler, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 11:39:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c21836e17d36bc55829f219cba04b4435b46ebbce7456d2a627cbab91decb54f-merged.mount: Deactivated successfully.
Nov 26 11:39:52 compute-0 podman[96355]: 2025-11-26 11:39:52.303916211 +0000 UTC m=+0.640793687 container remove 944ec68d0e72e44076062b0039f36a4832cb965609eee78453d6a91d0bce7b8f (image=quay.io/ceph/ceph:v18, name=cool_kepler, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:52 compute-0 systemd[1]: libpod-conmon-944ec68d0e72e44076062b0039f36a4832cb965609eee78453d6a91d0bce7b8f.scope: Deactivated successfully.
Nov 26 11:39:52 compute-0 sudo[96340]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:52 compute-0 sudo[96539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:52 compute-0 sudo[96539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:52 compute-0 sudo[96539]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 26 11:39:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 11:39:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 11:39:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 11:39:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 11:39:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Nov 26 11:39:52 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.1d( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.424134254s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 35.462455750s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.1d( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.424070358s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.462455750s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.391603470s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 32.430438995s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.17( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.391586304s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 32.430435181s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.1e( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.425021172s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 35.463840485s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.391575813s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 32.430438995s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.16( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.391448021s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 32.430427551s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.391710281s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 32.430484772s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.19( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.391483307s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 32.430507660s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.17( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.391314507s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 32.430435181s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.1e( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.424699783s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.463840485s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.16( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.391284943s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 32.430427551s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.11( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.424719810s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 35.464038849s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.11( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.424683571s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.464038849s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.391061783s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 32.430461884s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.391042709s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 32.430461884s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.13( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.424554825s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 35.464092255s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.13( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.424535751s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.464092255s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.12( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.424484253s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 35.464073181s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.12( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.424460411s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.464073181s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.390803337s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 32.430454254s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.390780449s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 32.430454254s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.14( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.424388885s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 35.464103699s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.14( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.424368858s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.464103699s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.15( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.424364090s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 35.464138031s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.15( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.424343109s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.464138031s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.11( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.390585899s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 32.430400848s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.11( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.390573502s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 32.430400848s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.16( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.424276352s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 35.464138031s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.16( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.424263954s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.464138031s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.390448570s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 32.430358887s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.390436172s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 32.430358887s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.9( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.424208641s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 35.464195251s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.386695862s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 32.426704407s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.390478134s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 32.430484772s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.386682510s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 32.426704407s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.9( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.424113274s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.464195251s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.7( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.424057007s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 35.464187622s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.7( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.424044609s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.464187622s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.7( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.390146255s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 32.430339813s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.7( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.390130997s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 32.430339813s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.2( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.386445045s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 32.426685333s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.2( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.386431694s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 32.426685333s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.5( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.423907280s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 35.464225769s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.5( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.423891068s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.464225769s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.3( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.386317253s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 32.426673889s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.3( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.386302948s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 32.426673889s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.4( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.423810005s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 35.464233398s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.4( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.386237144s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 32.426666260s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.4( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.423793793s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.464233398s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.4( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.386221886s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 32.426666260s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.3( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.423730850s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 35.464248657s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.386144638s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 32.426662445s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.3( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.423717499s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.464248657s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.386131287s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 32.426662445s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.19( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.389882088s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 32.430507660s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.6( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.386022568s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 32.426635742s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.2( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.423613548s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 35.464317322s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.6( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.385922432s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 32.426635742s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.2( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.423598289s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.464317322s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.8( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.386013985s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 32.426826477s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.8( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.385995865s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 32.426826477s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.f( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.423501968s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 35.464370728s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.f( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.423475266s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.464370728s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.9( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.389575005s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 32.430496216s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.9( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.389561653s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 32.430496216s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.385463715s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 32.426479340s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.385437965s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 32.426479340s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.c( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.423236847s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 35.464359283s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.c( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.423218727s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.464359283s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.385118484s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 32.426345825s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.385100365s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 32.426345825s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.385342598s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 32.426692963s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.385330200s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 32.426692963s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.1a( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.422898293s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 35.464317322s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.1a( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.422886848s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.464317322s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.19( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.422828674s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 35.464340210s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.19( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.422808647s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.464340210s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.1f( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.383994102s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 32.425579071s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.1f( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.383982658s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 32.425579071s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.18( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.422693253s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 35.464351654s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.18( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.422660828s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.464351654s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.1( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.422380447s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 35.464298248s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[5.1( empty local-lis/les=25/27 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29 pruub=11.422354698s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.464298248s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.384468079s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 32.426631927s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=8.384445190s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 32.426631927s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[2.11( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[2.1b( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[5.11( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[2.17( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[5.13( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[2.15( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[5.12( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[5.16( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[5.9( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[2.d( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[2.a( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[2.3( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[2.5( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[2.4( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[2.7( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[2.6( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[5.1( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[2.9( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[5.f( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[5.c( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[4.1c( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [2] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[5.1d( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[5.1a( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[4.1b( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [2] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[4.a( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [2] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[4.1a( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [2] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[2.13( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[5.18( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[5.19( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[5.14( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[5.15( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.1e( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405418396s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 38.283981323s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.1e( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405396461s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 38.283981323s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.1f( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405518532s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 38.284107208s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[2.16( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.1f( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405449867s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 38.284107208s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.1d( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.404771805s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 38.283454895s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.1d( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.404709816s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 38.283454895s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[2.8( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[2.b( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[5.3( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[5.2( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[2.1f( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[2.2( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[5.5( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[2.f( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[2.1c( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[5.4( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[2.1d( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[5.7( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[2.18( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[5.1e( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[2.19( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.1c( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.421730995s) [2] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 43.781982422s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.1c( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.421714783s) [2] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.781982422s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.8( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.421815872s) [1] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 43.782157898s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.8( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.421803474s) [1] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.782157898s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.7( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.421780586s) [1] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 43.782199860s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.7( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.421590805s) [1] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.782199860s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.1b( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.422925949s) [2] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 43.783664703s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.1b( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.422910690s) [2] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.783664703s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.a( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.422674179s) [2] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 43.783500671s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.a( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.422661781s) [2] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.783500671s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.5( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.422614098s) [1] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 43.783504486s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.5( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.422602654s) [1] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.783504486s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.1a( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.422550201s) [2] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 43.783515930s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.1a( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.422540665s) [2] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.783515930s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.9( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.422473907s) [1] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 43.783527374s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.9( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.422461510s) [1] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.783527374s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.4( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.422412872s) [1] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 43.783542633s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.4( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.422402382s) [1] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.783542633s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.1b( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405232430s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 38.284049988s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.a( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405475616s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 38.284305573s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.1b( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405215263s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 38.284049988s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.a( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405461311s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 38.284305573s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.9( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405196190s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 38.284111023s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.9( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405182838s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 38.284111023s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[4.8( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[4.7( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[4.5( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.7( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.404975891s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 38.284458160s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.7( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.404959679s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 38.284458160s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.6( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.404919624s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 38.284454346s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.8( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.404941559s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 38.284435272s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.6( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.404895782s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 38.284454346s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.1( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.418736458s) [2] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 43.784187317s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.1( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.418717384s) [2] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.784187317s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.2( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.418028831s) [1] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 43.783576965s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.2( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.418015480s) [1] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.783576965s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.d( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.418445587s) [1] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 43.784107208s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.d( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.418433189s) [1] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.784107208s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.e( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.418337822s) [2] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 43.784080505s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.e( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.418325424s) [2] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.784080505s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.f( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.418291092s) [1] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 43.784107208s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.f( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.418280602s) [1] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.784107208s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.10( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.418243408s) [1] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 43.784137726s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.10( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.418232918s) [1] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.784137726s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.5( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405117989s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 38.284694672s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.11( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.418143272s) [2] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 43.784114838s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.11( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.418131828s) [2] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.784114838s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.12( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.418073654s) [1] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 43.784145355s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.12( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.418059349s) [1] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.784145355s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.8( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.404858589s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 38.284435272s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.5( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405104637s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 38.284694672s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.1( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.404872894s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 38.284549713s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.1( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.404855728s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 38.284549713s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.e( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405397415s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 38.285137177s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.e( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405381203s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 38.285137177s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.f( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405363083s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 38.285152435s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.c( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405272484s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 38.285057068s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[3.1e( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[3.1d( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.13( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.417990685s) [2] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 43.784149170s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.13( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.417978287s) [2] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.784149170s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.14( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.417939186s) [1] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 43.784168243s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.14( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.417920113s) [1] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.784168243s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.18( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.417824745s) [2] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 43.784172058s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[4.18( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=12.417813301s) [2] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 43.784172058s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[3.1f( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[3.1b( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[3.a( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[3.9( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[3.6( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[3.7( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.f( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405345917s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 38.285152435s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[3.1( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[3.f( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[3.c( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[3.5( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[3.3( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.c( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405253410s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 38.285057068s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.11( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405647278s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 38.285484314s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[3.8( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[3.e( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.3( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.404684067s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 38.284534454s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[3.12( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[3.15( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 29 pg[3.17( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[3.11( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.11( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405634880s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 38.285484314s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.3( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.404659271s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 38.284534454s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[3.18( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.12( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405313492s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 38.285205841s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[3.16( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.12( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405299187s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 38.285205841s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.15( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405321121s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 38.285282135s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.16( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405454636s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 38.285430908s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.18( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405467033s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 38.285442352s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[4.1( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [2] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.17( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405419350s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 38.285400391s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.15( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405304909s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 38.285282135s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.16( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405432701s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 38.285430908s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.18( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405447006s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 38.285442352s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[3.17( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=10.405401230s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 38.285400391s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[4.9( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[4.4( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[4.2( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[4.d( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[4.f( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[4.10( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[4.12( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 29 pg[4.14( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[4.e( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [2] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[4.11( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [2] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[4.13( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [2] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 29 pg[4.18( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [2] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:39:52 compute-0 sudo[96574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:52 compute-0 sudo[96574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:52 compute-0 sudo[96574]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:52 compute-0 sudo[96635]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkvfgzzjydcmrhcokidcpmosmxpkxmno ; /usr/bin/python3'
Nov 26 11:39:52 compute-0 sudo[96635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:52 compute-0 sudo[96607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:52 compute-0 sudo[96607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:52 compute-0 sudo[96607]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:52 compute-0 sudo[96650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 11:39:52 compute-0 sudo[96650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:52 compute-0 python3[96647]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:39:52 compute-0 podman[96675]: 2025-11-26 11:39:52.574778559 +0000 UTC m=+0.026694527 container create 56d4a488972e97248a995e49b73bf99e257c5c4f99eca64276934d4e37012ff3 (image=quay.io/ceph/ceph:v18, name=upbeat_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:52 compute-0 systemd[1]: Started libpod-conmon-56d4a488972e97248a995e49b73bf99e257c5c4f99eca64276934d4e37012ff3.scope.
Nov 26 11:39:52 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f3a714052595699577716fcab8f23f91669489d95ddfac268f35f79d5a85426/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f3a714052595699577716fcab8f23f91669489d95ddfac268f35f79d5a85426/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f3a714052595699577716fcab8f23f91669489d95ddfac268f35f79d5a85426/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:52 compute-0 podman[96675]: 2025-11-26 11:39:52.633133106 +0000 UTC m=+0.085049086 container init 56d4a488972e97248a995e49b73bf99e257c5c4f99eca64276934d4e37012ff3 (image=quay.io/ceph/ceph:v18, name=upbeat_brattain, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 11:39:52 compute-0 podman[96675]: 2025-11-26 11:39:52.637453957 +0000 UTC m=+0.089369926 container start 56d4a488972e97248a995e49b73bf99e257c5c4f99eca64276934d4e37012ff3 (image=quay.io/ceph/ceph:v18, name=upbeat_brattain, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:52 compute-0 podman[96675]: 2025-11-26 11:39:52.638620268 +0000 UTC m=+0.090536236 container attach 56d4a488972e97248a995e49b73bf99e257c5c4f99eca64276934d4e37012ff3 (image=quay.io/ceph/ceph:v18, name=upbeat_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 11:39:52 compute-0 podman[96675]: 2025-11-26 11:39:52.564099643 +0000 UTC m=+0.016015633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:39:52 compute-0 sudo[96650]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:39:52 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:39:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:39:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:39:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:52 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 6b3b2633-5dac-4e83-92d6-7e2270ec66d1 does not exist
Nov 26 11:39:52 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 35fae0ff-8032-447d-8637-eed797106e42 does not exist
Nov 26 11:39:52 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev f4c9505a-6505-4772-8466-77c28971c36b does not exist
Nov 26 11:39:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 11:39:52 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:39:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 11:39:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:39:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:39:52 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:52 compute-0 sudo[96720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:52 compute-0 sudo[96720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:52 compute-0 sudo[96720]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:52 compute-0 sudo[96745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:52 compute-0 sudo[96745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:52 compute-0 sudo[96745]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:52 compute-0 sudo[96789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:52 compute-0 sudo[96789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:52 compute-0 sudo[96789]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:53 compute-0 sudo[96814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 11:39:53 compute-0 sudo[96814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:53 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:39:53 compute-0 ceph-mgr[75197]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Nov 26 11:39:53 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 26 11:39:53 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 26 11:39:53 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:53 compute-0 upbeat_brattain[96694]: Scheduled rgw.rgw update...
Nov 26 11:39:53 compute-0 systemd[1]: libpod-56d4a488972e97248a995e49b73bf99e257c5c4f99eca64276934d4e37012ff3.scope: Deactivated successfully.
Nov 26 11:39:53 compute-0 podman[96675]: 2025-11-26 11:39:53.097695856 +0000 UTC m=+0.549611835 container died 56d4a488972e97248a995e49b73bf99e257c5c4f99eca64276934d4e37012ff3 (image=quay.io/ceph/ceph:v18, name=upbeat_brattain, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Nov 26 11:39:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f3a714052595699577716fcab8f23f91669489d95ddfac268f35f79d5a85426-merged.mount: Deactivated successfully.
Nov 26 11:39:53 compute-0 podman[96675]: 2025-11-26 11:39:53.129579309 +0000 UTC m=+0.581495278 container remove 56d4a488972e97248a995e49b73bf99e257c5c4f99eca64276934d4e37012ff3 (image=quay.io/ceph/ceph:v18, name=upbeat_brattain, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 11:39:53 compute-0 systemd[1]: libpod-conmon-56d4a488972e97248a995e49b73bf99e257c5c4f99eca64276934d4e37012ff3.scope: Deactivated successfully.
Nov 26 11:39:53 compute-0 sudo[96635]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:53 compute-0 podman[96884]: 2025-11-26 11:39:53.248908078 +0000 UTC m=+0.026980247 container create 9bc314053edd206684ece8068c3d031dad43e48d5f7633fc317e8ef4a5fe76ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 11:39:53 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3133095100' entity='client.admin' 
Nov 26 11:39:53 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:53 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:53 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 11:39:53 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 11:39:53 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 11:39:53 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 11:39:53 compute-0 ceph-mon[74928]: osdmap e29: 3 total, 3 up, 3 in
Nov 26 11:39:53 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:53 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:39:53 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:53 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:39:53 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:39:53 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:53 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:53 compute-0 systemd[1]: Started libpod-conmon-9bc314053edd206684ece8068c3d031dad43e48d5f7633fc317e8ef4a5fe76ef.scope.
Nov 26 11:39:53 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:53 compute-0 podman[96884]: 2025-11-26 11:39:53.306356697 +0000 UTC m=+0.084428886 container init 9bc314053edd206684ece8068c3d031dad43e48d5f7633fc317e8ef4a5fe76ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ganguly, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:53 compute-0 podman[96884]: 2025-11-26 11:39:53.310222681 +0000 UTC m=+0.088294848 container start 9bc314053edd206684ece8068c3d031dad43e48d5f7633fc317e8ef4a5fe76ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:53 compute-0 podman[96884]: 2025-11-26 11:39:53.311465494 +0000 UTC m=+0.089537683 container attach 9bc314053edd206684ece8068c3d031dad43e48d5f7633fc317e8ef4a5fe76ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ganguly, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 11:39:53 compute-0 pedantic_ganguly[96897]: 167 167
Nov 26 11:39:53 compute-0 systemd[1]: libpod-9bc314053edd206684ece8068c3d031dad43e48d5f7633fc317e8ef4a5fe76ef.scope: Deactivated successfully.
Nov 26 11:39:53 compute-0 conmon[96897]: conmon 9bc314053edd206684ec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9bc314053edd206684ece8068c3d031dad43e48d5f7633fc317e8ef4a5fe76ef.scope/container/memory.events
Nov 26 11:39:53 compute-0 podman[96884]: 2025-11-26 11:39:53.314725745 +0000 UTC m=+0.092797913 container died 9bc314053edd206684ece8068c3d031dad43e48d5f7633fc317e8ef4a5fe76ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 11:39:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8a3891d204f3bf53cb293a817648b644885924036d4d7a136dabf689b36ab05-merged.mount: Deactivated successfully.
Nov 26 11:39:53 compute-0 podman[96884]: 2025-11-26 11:39:53.332714448 +0000 UTC m=+0.110786626 container remove 9bc314053edd206684ece8068c3d031dad43e48d5f7633fc317e8ef4a5fe76ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ganguly, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 11:39:53 compute-0 podman[96884]: 2025-11-26 11:39:53.237480552 +0000 UTC m=+0.015552740 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:53 compute-0 systemd[1]: libpod-conmon-9bc314053edd206684ece8068c3d031dad43e48d5f7633fc317e8ef4a5fe76ef.scope: Deactivated successfully.
Nov 26 11:39:53 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 26 11:39:53 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Nov 26 11:39:53 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Nov 26 11:39:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 30 pg[4.1c( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [2] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 30 pg[4.11( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [2] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 30 pg[3.18( empty local-lis/les=29/30 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 30 pg[4.13( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [2] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 30 pg[3.11( empty local-lis/les=29/30 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 30 pg[3.16( empty local-lis/les=29/30 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 30 pg[3.e( empty local-lis/les=29/30 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 30 pg[4.a( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [2] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 30 pg[4.1( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [2] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 30 pg[3.5( empty local-lis/les=29/30 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 30 pg[3.7( empty local-lis/les=29/30 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 30 pg[4.e( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [2] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 30 pg[3.8( empty local-lis/les=29/30 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 30 pg[4.1a( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [2] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 30 pg[4.1b( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [2] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 30 pg[3.1e( empty local-lis/les=29/30 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 30 pg[4.18( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [2] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 30 pg[3.1d( empty local-lis/les=29/30 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[5.18( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[5.1a( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[5.19( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[5.c( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[5.f( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[2.9( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[5.1( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[2.7( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[2.3( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[2.5( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[2.6( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[2.a( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[2.d( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[5.12( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[5.1d( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[5.9( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[2.15( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[2.17( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[5.16( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[2.4( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[5.11( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[2.1b( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[5.13( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[4.14( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[4.12( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[4.10( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[4.9( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[4.5( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[4.7( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[4.8( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[4.2( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[4.4( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[4.f( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 30 pg[4.d( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[3.1f( empty local-lis/les=29/30 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[2.11( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[3.12( empty local-lis/les=29/30 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[5.14( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[5.15( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[3.15( empty local-lis/les=29/30 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[2.13( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[3.17( empty local-lis/les=29/30 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[2.16( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[2.8( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[3.9( empty local-lis/les=29/30 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[2.b( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[3.a( empty local-lis/les=29/30 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[5.3( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[5.2( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[3.6( empty local-lis/les=29/30 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[2.2( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[5.5( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[2.1f( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[2.f( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[3.3( empty local-lis/les=29/30 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[2.1c( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[2.1d( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[3.1( empty local-lis/les=29/30 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[5.7( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[3.c( empty local-lis/les=29/30 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[5.4( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[3.1b( empty local-lis/les=29/30 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[3.f( empty local-lis/les=29/30 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[2.18( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[5.1e( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=27/27/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 30 pg[2.19( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:39:53 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v57: 131 pgs: 18 peering, 113 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:39:53 compute-0 podman[96919]: 2025-11-26 11:39:53.460964876 +0000 UTC m=+0.029572476 container create 7bda33c498675f653347006ec1b4c88a46fdf820f70ef9d5abeab41b12c3595d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:53 compute-0 systemd[1]: Started libpod-conmon-7bda33c498675f653347006ec1b4c88a46fdf820f70ef9d5abeab41b12c3595d.scope.
Nov 26 11:39:53 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeca430333fb60c4e5978e39a4235e909665065ffb4ef4cadea83db64c7d6a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeca430333fb60c4e5978e39a4235e909665065ffb4ef4cadea83db64c7d6a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeca430333fb60c4e5978e39a4235e909665065ffb4ef4cadea83db64c7d6a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeca430333fb60c4e5978e39a4235e909665065ffb4ef4cadea83db64c7d6a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeca430333fb60c4e5978e39a4235e909665065ffb4ef4cadea83db64c7d6a7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:53 compute-0 podman[96919]: 2025-11-26 11:39:53.531934913 +0000 UTC m=+0.100542512 container init 7bda33c498675f653347006ec1b4c88a46fdf820f70ef9d5abeab41b12c3595d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:53 compute-0 podman[96919]: 2025-11-26 11:39:53.536961605 +0000 UTC m=+0.105569205 container start 7bda33c498675f653347006ec1b4c88a46fdf820f70ef9d5abeab41b12c3595d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 11:39:53 compute-0 podman[96919]: 2025-11-26 11:39:53.538675167 +0000 UTC m=+0.107282767 container attach 7bda33c498675f653347006ec1b4c88a46fdf820f70ef9d5abeab41b12c3595d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:53 compute-0 podman[96919]: 2025-11-26 11:39:53.449024041 +0000 UTC m=+0.017631652 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:53 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 2.c scrub starts
Nov 26 11:39:53 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 2.c scrub ok
Nov 26 11:39:54 compute-0 python3[97013]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:39:54 compute-0 python3[97092]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764157193.8280215-37029-95097557901258/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:39:54 compute-0 eager_wu[96933]: --> passed data devices: 0 physical, 3 LVM
Nov 26 11:39:54 compute-0 eager_wu[96933]: --> relative data size: 1.0
Nov 26 11:39:54 compute-0 eager_wu[96933]: --> All data devices are unavailable
Nov 26 11:39:54 compute-0 ceph-mon[74928]: from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:39:54 compute-0 ceph-mon[74928]: Saving service rgw.rgw spec with placement compute-0
Nov 26 11:39:54 compute-0 ceph-mon[74928]: osdmap e30: 3 total, 3 up, 3 in
Nov 26 11:39:54 compute-0 ceph-mon[74928]: pgmap v57: 131 pgs: 18 peering, 113 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:39:54 compute-0 ceph-mon[74928]: 2.c scrub starts
Nov 26 11:39:54 compute-0 ceph-mon[74928]: 2.c scrub ok
Nov 26 11:39:54 compute-0 systemd[1]: libpod-7bda33c498675f653347006ec1b4c88a46fdf820f70ef9d5abeab41b12c3595d.scope: Deactivated successfully.
Nov 26 11:39:54 compute-0 podman[96919]: 2025-11-26 11:39:54.377094369 +0000 UTC m=+0.945701969 container died 7bda33c498675f653347006ec1b4c88a46fdf820f70ef9d5abeab41b12c3595d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebeca430333fb60c4e5978e39a4235e909665065ffb4ef4cadea83db64c7d6a7-merged.mount: Deactivated successfully.
Nov 26 11:39:54 compute-0 podman[96919]: 2025-11-26 11:39:54.405413609 +0000 UTC m=+0.974021210 container remove 7bda33c498675f653347006ec1b4c88a46fdf820f70ef9d5abeab41b12c3595d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:54 compute-0 systemd[1]: libpod-conmon-7bda33c498675f653347006ec1b4c88a46fdf820f70ef9d5abeab41b12c3595d.scope: Deactivated successfully.
Nov 26 11:39:54 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Nov 26 11:39:54 compute-0 sudo[96814]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:54 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Nov 26 11:39:54 compute-0 sudo[97143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:54 compute-0 sudo[97143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:54 compute-0 sudo[97143]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:54 compute-0 sudo[97168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:54 compute-0 sudo[97168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:54 compute-0 sudo[97168]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:54 compute-0 sudo[97193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:54 compute-0 sudo[97193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:54 compute-0 sudo[97193]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:54 compute-0 sudo[97248]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idpqfldanmwulxqhdsldxthgshqclnzk ; /usr/bin/python3'
Nov 26 11:39:54 compute-0 sudo[97248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:54 compute-0 sudo[97235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- lvm list --format json
Nov 26 11:39:54 compute-0 sudo[97235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:54 compute-0 python3[97266]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:39:54 compute-0 podman[97271]: 2025-11-26 11:39:54.734718302 +0000 UTC m=+0.027933325 container create 53f0790c15f837f658682123f43f92bd77d6f54a1a1f849b48a3e17ef51df6af (image=quay.io/ceph/ceph:v18, name=blissful_northcutt, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 26 11:39:54 compute-0 systemd[1]: Started libpod-conmon-53f0790c15f837f658682123f43f92bd77d6f54a1a1f849b48a3e17ef51df6af.scope.
Nov 26 11:39:54 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612c166c8a2b02842c2834c9093cf6d06bb94ea915862300297430d7d0ee23f8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612c166c8a2b02842c2834c9093cf6d06bb94ea915862300297430d7d0ee23f8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612c166c8a2b02842c2834c9093cf6d06bb94ea915862300297430d7d0ee23f8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:54 compute-0 podman[97271]: 2025-11-26 11:39:54.78924521 +0000 UTC m=+0.082460233 container init 53f0790c15f837f658682123f43f92bd77d6f54a1a1f849b48a3e17ef51df6af (image=quay.io/ceph/ceph:v18, name=blissful_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:54 compute-0 podman[97271]: 2025-11-26 11:39:54.794688828 +0000 UTC m=+0.087903841 container start 53f0790c15f837f658682123f43f92bd77d6f54a1a1f849b48a3e17ef51df6af (image=quay.io/ceph/ceph:v18, name=blissful_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 11:39:54 compute-0 podman[97271]: 2025-11-26 11:39:54.796270091 +0000 UTC m=+0.089485104 container attach 53f0790c15f837f658682123f43f92bd77d6f54a1a1f849b48a3e17ef51df6af (image=quay.io/ceph/ceph:v18, name=blissful_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:54 compute-0 podman[97271]: 2025-11-26 11:39:54.723512052 +0000 UTC m=+0.016727085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:39:54 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Nov 26 11:39:54 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Nov 26 11:39:54 compute-0 podman[97316]: 2025-11-26 11:39:54.876303283 +0000 UTC m=+0.027093810 container create 42ede5f74e6f4023b0f476cc49daae47117eaef60e1d25535fd2a653f64168d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:39:54 compute-0 systemd[1]: Started libpod-conmon-42ede5f74e6f4023b0f476cc49daae47117eaef60e1d25535fd2a653f64168d1.scope.
Nov 26 11:39:54 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:54 compute-0 podman[97316]: 2025-11-26 11:39:54.919920031 +0000 UTC m=+0.070710557 container init 42ede5f74e6f4023b0f476cc49daae47117eaef60e1d25535fd2a653f64168d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 11:39:54 compute-0 podman[97316]: 2025-11-26 11:39:54.925857701 +0000 UTC m=+0.076648228 container start 42ede5f74e6f4023b0f476cc49daae47117eaef60e1d25535fd2a653f64168d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_easley, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:54 compute-0 podman[97316]: 2025-11-26 11:39:54.926952247 +0000 UTC m=+0.077742773 container attach 42ede5f74e6f4023b0f476cc49daae47117eaef60e1d25535fd2a653f64168d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_easley, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:54 compute-0 admiring_easley[97329]: 167 167
Nov 26 11:39:54 compute-0 systemd[1]: libpod-42ede5f74e6f4023b0f476cc49daae47117eaef60e1d25535fd2a653f64168d1.scope: Deactivated successfully.
Nov 26 11:39:54 compute-0 conmon[97329]: conmon 42ede5f74e6f4023b0f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42ede5f74e6f4023b0f476cc49daae47117eaef60e1d25535fd2a653f64168d1.scope/container/memory.events
Nov 26 11:39:54 compute-0 podman[97316]: 2025-11-26 11:39:54.929596645 +0000 UTC m=+0.080387171 container died 42ede5f74e6f4023b0f476cc49daae47117eaef60e1d25535fd2a653f64168d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_easley, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e6f17f3618c86f16fbc07a43f19c88261a5b89c3db8fe916c42d2742d52b7e9-merged.mount: Deactivated successfully.
Nov 26 11:39:54 compute-0 podman[97316]: 2025-11-26 11:39:54.947276656 +0000 UTC m=+0.098067181 container remove 42ede5f74e6f4023b0f476cc49daae47117eaef60e1d25535fd2a653f64168d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_easley, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 11:39:54 compute-0 podman[97316]: 2025-11-26 11:39:54.864542358 +0000 UTC m=+0.015332904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:54 compute-0 systemd[1]: libpod-conmon-42ede5f74e6f4023b0f476cc49daae47117eaef60e1d25535fd2a653f64168d1.scope: Deactivated successfully.
Nov 26 11:39:55 compute-0 podman[97352]: 2025-11-26 11:39:55.059890317 +0000 UTC m=+0.027815240 container create 08414181c9c6df7aa88861f71526f3b9cb632f813de85f32adaac61c45d0480b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 11:39:55 compute-0 systemd[1]: Started libpod-conmon-08414181c9c6df7aa88861f71526f3b9cb632f813de85f32adaac61c45d0480b.scope.
Nov 26 11:39:55 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af21fc5c8c03c1ecd7de26f58750cbe0fac37e1d7d00595c51dcc00328bde0c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af21fc5c8c03c1ecd7de26f58750cbe0fac37e1d7d00595c51dcc00328bde0c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af21fc5c8c03c1ecd7de26f58750cbe0fac37e1d7d00595c51dcc00328bde0c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af21fc5c8c03c1ecd7de26f58750cbe0fac37e1d7d00595c51dcc00328bde0c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
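[annotation] The four xfs notices above are the kernel pointing out that the overlay mounts for this container sit on an XFS filesystem without the bigtime feature, so inode timestamps top out at 2038-01-19 (0x7fffffff). This is informational only and has no effect on the Ceph bootstrap in progress. A sketch of how to check the backing filesystem on the host (paths assumed, not taken from this log):

    findmnt -T /var/lib/containers/storage -o TARGET,SOURCE,FSTYPE
    xfs_info "$(findmnt -nT /var/lib/containers/storage -o TARGET)"
    # bigtime=1 in the meta-data line means 64-bit timestamps; bigtime=0 matches the 2038 notice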
Nov 26 11:39:55 compute-0 podman[97352]: 2025-11-26 11:39:55.122592567 +0000 UTC m=+0.090517500 container init 08414181c9c6df7aa88861f71526f3b9cb632f813de85f32adaac61c45d0480b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shamir, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:55 compute-0 podman[97352]: 2025-11-26 11:39:55.128169918 +0000 UTC m=+0.096094841 container start 08414181c9c6df7aa88861f71526f3b9cb632f813de85f32adaac61c45d0480b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 11:39:55 compute-0 podman[97352]: 2025-11-26 11:39:55.129600927 +0000 UTC m=+0.097525860 container attach 08414181c9c6df7aa88861f71526f3b9cb632f813de85f32adaac61c45d0480b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shamir, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:55 compute-0 podman[97352]: 2025-11-26 11:39:55.048378412 +0000 UTC m=+0.016303355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:55 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:39:55 compute-0 ceph-mgr[75197]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 26 11:39:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Nov 26 11:39:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 26 11:39:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Nov 26 11:39:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 26 11:39:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Nov 26 11:39:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 26 11:39:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 26 11:39:55 compute-0 ceph-mon[74928]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 26 11:39:55 compute-0 ceph-mon[74928]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 26 11:39:55 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0[74924]: 2025-11-26T11:39:55.252+0000 7f695aeb2640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 26 11:39:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
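[annotation] The pool-create and fs-new dispatches above are the mgr volumes module acting on the single "fs volume create" request logged at 11:39:55. A minimal sketch of the equivalent CLI calls, assuming the default admin keyring, would be:

    # one-shot form handled by the mgr volumes module
    ceph fs volume create cephfs "compute-0"

    # roughly what it expands to, per the audit trail above
    ceph osd pool create cephfs.cephfs.meta
    ceph osd pool create cephfs.cephfs.data --bulk
    ceph fs new cephfs cephfs.cephfs.meta cephfs.cephfs.data

The MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX health checks that fire in between are expected at this point: the filesystem now exists but no MDS daemon has been deployed for it yet.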
Nov 26 11:39:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).mds e2 new map
Nov 26 11:39:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-26T11:39:55.253383+0000
                                           modified        2025-11-26T11:39:55.253442+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Nov 26 11:39:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Nov 26 11:39:55 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Nov 26 11:39:55 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : fsmap cephfs:0
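[annotation] The print_map dump above is the fsmap at epoch 2: filesystem 'cephfs' (fscid 1) created, max_mds 1, and nothing in the up set, which is also what the condensed "fsmap cephfs:0" line reports. The same information can be read back from the cluster at any time with read-only commands such as:

    ceph fs dump              # full fsmap, same fields as the print_map block
    ceph fs get cephfs        # only the 'cephfs' entry
    ceph fs status cephfs     # condensed view once MDS daemons have joined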
Nov 26 11:39:55 compute-0 ceph-mgr[75197]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 26 11:39:55 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 26 11:39:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 26 11:39:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:55 compute-0 ceph-mgr[75197]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 26 11:39:55 compute-0 systemd[1]: libpod-53f0790c15f837f658682123f43f92bd77d6f54a1a1f849b48a3e17ef51df6af.scope: Deactivated successfully.
Nov 26 11:39:55 compute-0 conmon[97301]: conmon 53f0790c15f837f65868 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-53f0790c15f837f658682123f43f92bd77d6f54a1a1f849b48a3e17ef51df6af.scope/container/memory.events
Nov 26 11:39:55 compute-0 podman[97391]: 2025-11-26 11:39:55.32716107 +0000 UTC m=+0.016541554 container died 53f0790c15f837f658682123f43f92bd77d6f54a1a1f849b48a3e17ef51df6af (image=quay.io/ceph/ceph:v18, name=blissful_northcutt, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 11:39:55 compute-0 podman[97391]: 2025-11-26 11:39:55.346371618 +0000 UTC m=+0.035752102 container remove 53f0790c15f837f658682123f43f92bd77d6f54a1a1f849b48a3e17ef51df6af (image=quay.io/ceph/ceph:v18, name=blissful_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:55 compute-0 systemd[1]: libpod-conmon-53f0790c15f837f658682123f43f92bd77d6f54a1a1f849b48a3e17ef51df6af.scope: Deactivated successfully.
Nov 26 11:39:55 compute-0 ceph-mon[74928]: 3.2 scrub starts
Nov 26 11:39:55 compute-0 ceph-mon[74928]: 3.2 scrub ok
Nov 26 11:39:55 compute-0 ceph-mon[74928]: 4.3 scrub starts
Nov 26 11:39:55 compute-0 ceph-mon[74928]: 4.3 scrub ok
Nov 26 11:39:55 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 26 11:39:55 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 26 11:39:55 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 26 11:39:55 compute-0 ceph-mon[74928]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 26 11:39:55 compute-0 ceph-mon[74928]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 26 11:39:55 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 26 11:39:55 compute-0 ceph-mon[74928]: osdmap e31: 3 total, 3 up, 3 in
Nov 26 11:39:55 compute-0 ceph-mon[74928]: fsmap cephfs:0
Nov 26 11:39:55 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:55 compute-0 sudo[97248]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:55 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v59: 131 pgs: 18 peering, 113 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:39:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-612c166c8a2b02842c2834c9093cf6d06bb94ea915862300297430d7d0ee23f8-merged.mount: Deactivated successfully.
Nov 26 11:39:55 compute-0 sudo[97426]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjiverhpcubudazkkrfsywdssdmgcpdn ; /usr/bin/python3'
Nov 26 11:39:55 compute-0 sudo[97426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:55 compute-0 python3[97428]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
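[annotation] The ansible task above mounts /tmp/ceph_mds.yml into the container as /home/ceph_spec.yaml and feeds it to "ceph orch apply --in-file". The spec file itself is not captured in the journal; based on the "Saving service mds.cephfs spec with placement compute-0" lines, a minimal spec with that effect (the exact contents are an assumption) could be written as:

    cat > /tmp/ceph_mds.yml <<'EOF'
    service_type: mds
    service_id: cephfs
    placement:
      hosts:
        - compute-0
    EOF

cephadm then schedules one MDS daemon for the cephfs filesystem on compute-0, which is what the "Scheduled mds.cephfs update..." line further below reports.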
Nov 26 11:39:55 compute-0 podman[97429]: 2025-11-26 11:39:55.628553479 +0000 UTC m=+0.028074389 container create a6008f80d48bc2e6b98ae9b5f620708d4c2da980bd5d04fcd12cb8fd32bd4045 (image=quay.io/ceph/ceph:v18, name=strange_carson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:55 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 26 11:39:55 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 26 11:39:55 compute-0 systemd[1]: Started libpod-conmon-a6008f80d48bc2e6b98ae9b5f620708d4c2da980bd5d04fcd12cb8fd32bd4045.scope.
Nov 26 11:39:55 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3283b78cc641f5d0189fb663e0a9b6e0607b7fa8be80dd77c948ae5254e847a1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3283b78cc641f5d0189fb663e0a9b6e0607b7fa8be80dd77c948ae5254e847a1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3283b78cc641f5d0189fb663e0a9b6e0607b7fa8be80dd77c948ae5254e847a1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:55 compute-0 podman[97429]: 2025-11-26 11:39:55.690012032 +0000 UTC m=+0.089532963 container init a6008f80d48bc2e6b98ae9b5f620708d4c2da980bd5d04fcd12cb8fd32bd4045 (image=quay.io/ceph/ceph:v18, name=strange_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:55 compute-0 podman[97429]: 2025-11-26 11:39:55.694015053 +0000 UTC m=+0.093535965 container start a6008f80d48bc2e6b98ae9b5f620708d4c2da980bd5d04fcd12cb8fd32bd4045 (image=quay.io/ceph/ceph:v18, name=strange_carson, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 11:39:55 compute-0 podman[97429]: 2025-11-26 11:39:55.695356334 +0000 UTC m=+0.094877244 container attach a6008f80d48bc2e6b98ae9b5f620708d4c2da980bd5d04fcd12cb8fd32bd4045 (image=quay.io/ceph/ceph:v18, name=strange_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:55 compute-0 podman[97429]: 2025-11-26 11:39:55.616772966 +0000 UTC m=+0.016293898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:39:55 compute-0 charming_shamir[97384]: {
Nov 26 11:39:55 compute-0 charming_shamir[97384]:     "0": [
Nov 26 11:39:55 compute-0 charming_shamir[97384]:         {
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "devices": [
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "/dev/loop3"
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             ],
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "lv_name": "ceph_lv0",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "lv_size": "21470642176",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a9ad59a0-aa2e-4d92-b571-519d2d145b6a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "lv_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "name": "ceph_lv0",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "tags": {
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.block_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.cluster_name": "ceph",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.crush_device_class": "",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.encrypted": "0",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.osd_fsid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.osd_id": "0",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.type": "block",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.vdo": "0"
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             },
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "type": "block",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "vg_name": "ceph_vg0"
Nov 26 11:39:55 compute-0 charming_shamir[97384]:         }
Nov 26 11:39:55 compute-0 charming_shamir[97384]:     ],
Nov 26 11:39:55 compute-0 charming_shamir[97384]:     "1": [
Nov 26 11:39:55 compute-0 charming_shamir[97384]:         {
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "devices": [
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "/dev/loop4"
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             ],
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "lv_name": "ceph_lv1",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "lv_size": "21470642176",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2627095b-eef8-4027-bfef-68bf7cb6801f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "lv_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "name": "ceph_lv1",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "tags": {
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.block_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.cluster_name": "ceph",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.crush_device_class": "",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.encrypted": "0",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.osd_fsid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.osd_id": "1",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.type": "block",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.vdo": "0"
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             },
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "type": "block",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "vg_name": "ceph_vg1"
Nov 26 11:39:55 compute-0 charming_shamir[97384]:         }
Nov 26 11:39:55 compute-0 charming_shamir[97384]:     ],
Nov 26 11:39:55 compute-0 charming_shamir[97384]:     "2": [
Nov 26 11:39:55 compute-0 charming_shamir[97384]:         {
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "devices": [
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "/dev/loop5"
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             ],
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "lv_name": "ceph_lv2",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "lv_size": "21470642176",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d56156fb-7361-4bef-b06b-1320109b4323,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "lv_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "name": "ceph_lv2",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "tags": {
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.block_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.cluster_name": "ceph",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.crush_device_class": "",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.encrypted": "0",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.osd_fsid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.osd_id": "2",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.type": "block",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:                 "ceph.vdo": "0"
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             },
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "type": "block",
Nov 26 11:39:55 compute-0 charming_shamir[97384]:             "vg_name": "ceph_vg2"
Nov 26 11:39:55 compute-0 charming_shamir[97384]:         }
Nov 26 11:39:55 compute-0 charming_shamir[97384]:     ]
Nov 26 11:39:55 compute-0 charming_shamir[97384]: }
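[annotation] The JSON block emitted by charming_shamir is in the format of "ceph-volume lvm list --format json" output (the exact invocation is not shown in this excerpt): three BlueStore OSDs (ids 0-2), each backed by a single LV (ceph_vg0/ceph_lv0 through ceph_vg2/ceph_lv2) on a loop device, all tagged with this cluster's fsid. One way to reduce it to an OSD-to-device table, assuming jq is installed on the host:

    sudo cephadm ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- lvm list --format json \
      | jq -r 'to_entries[] | "osd.\(.key)  \(.value[0].lv_path)  (\(.value[0].devices | join(",")))"'
    # osd.0  /dev/ceph_vg0/ceph_lv0  (/dev/loop3)
    # osd.1  /dev/ceph_vg1/ceph_lv1  (/dev/loop4)
    # osd.2  /dev/ceph_vg2/ceph_lv2  (/dev/loop5)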
Nov 26 11:39:55 compute-0 systemd[1]: libpod-08414181c9c6df7aa88861f71526f3b9cb632f813de85f32adaac61c45d0480b.scope: Deactivated successfully.
Nov 26 11:39:55 compute-0 podman[97352]: 2025-11-26 11:39:55.779841395 +0000 UTC m=+0.747766328 container died 08414181c9c6df7aa88861f71526f3b9cb632f813de85f32adaac61c45d0480b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-af21fc5c8c03c1ecd7de26f58750cbe0fac37e1d7d00595c51dcc00328bde0c5-merged.mount: Deactivated successfully.
Nov 26 11:39:55 compute-0 podman[97352]: 2025-11-26 11:39:55.811879489 +0000 UTC m=+0.779804413 container remove 08414181c9c6df7aa88861f71526f3b9cb632f813de85f32adaac61c45d0480b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:55 compute-0 sudo[97235]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:55 compute-0 systemd[1]: libpod-conmon-08414181c9c6df7aa88861f71526f3b9cb632f813de85f32adaac61c45d0480b.scope: Deactivated successfully.
Nov 26 11:39:55 compute-0 sudo[97456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:55 compute-0 sudo[97456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:55 compute-0 sudo[97456]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:55 compute-0 sudo[97481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:55 compute-0 sudo[97481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:55 compute-0 sudo[97481]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:55 compute-0 sudo[97506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:55 compute-0 sudo[97506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:55 compute-0 sudo[97506]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:56 compute-0 sudo[97550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- raw list --format json
Nov 26 11:39:56 compute-0 sudo[97550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:56 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:39:56 compute-0 ceph-mgr[75197]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 26 11:39:56 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 26 11:39:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 26 11:39:56 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:56 compute-0 strange_carson[97440]: Scheduled mds.cephfs update...
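[annotation] With the spec saved, the orchestrator applies it asynchronously; progress can be followed with the usual read-only orchestrator commands, for instance:

    ceph orch ls mds                    # service-level summary for the mds.cephfs spec
    ceph orch ps --daemon-type mds      # per-daemon status; an mds.cephfs.* daemon should appear on compute-0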
Nov 26 11:39:56 compute-0 systemd[1]: libpod-a6008f80d48bc2e6b98ae9b5f620708d4c2da980bd5d04fcd12cb8fd32bd4045.scope: Deactivated successfully.
Nov 26 11:39:56 compute-0 podman[97599]: 2025-11-26 11:39:56.186991531 +0000 UTC m=+0.017250912 container died a6008f80d48bc2e6b98ae9b5f620708d4c2da980bd5d04fcd12cb8fd32bd4045 (image=quay.io/ceph/ceph:v18, name=strange_carson, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-3283b78cc641f5d0189fb663e0a9b6e0607b7fa8be80dd77c948ae5254e847a1-merged.mount: Deactivated successfully.
Nov 26 11:39:56 compute-0 podman[97599]: 2025-11-26 11:39:56.209978584 +0000 UTC m=+0.040237964 container remove a6008f80d48bc2e6b98ae9b5f620708d4c2da980bd5d04fcd12cb8fd32bd4045 (image=quay.io/ceph/ceph:v18, name=strange_carson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 11:39:56 compute-0 systemd[1]: libpod-conmon-a6008f80d48bc2e6b98ae9b5f620708d4c2da980bd5d04fcd12cb8fd32bd4045.scope: Deactivated successfully.
Nov 26 11:39:56 compute-0 sudo[97426]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:56 compute-0 podman[97620]: 2025-11-26 11:39:56.257683173 +0000 UTC m=+0.028324571 container create d2a9a2ce3b8ea31efa190972268ea25c1cb91bc973edce4c944a77f532f1a7e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_leakey, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:39:56 compute-0 systemd[1]: Started libpod-conmon-d2a9a2ce3b8ea31efa190972268ea25c1cb91bc973edce4c944a77f532f1a7e9.scope.
Nov 26 11:39:56 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:56 compute-0 podman[97620]: 2025-11-26 11:39:56.30894395 +0000 UTC m=+0.079585369 container init d2a9a2ce3b8ea31efa190972268ea25c1cb91bc973edce4c944a77f532f1a7e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_leakey, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:56 compute-0 podman[97620]: 2025-11-26 11:39:56.313689661 +0000 UTC m=+0.084331060 container start d2a9a2ce3b8ea31efa190972268ea25c1cb91bc973edce4c944a77f532f1a7e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:56 compute-0 podman[97620]: 2025-11-26 11:39:56.314756053 +0000 UTC m=+0.085397452 container attach d2a9a2ce3b8ea31efa190972268ea25c1cb91bc973edce4c944a77f532f1a7e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:56 compute-0 sad_leakey[97634]: 167 167
Nov 26 11:39:56 compute-0 systemd[1]: libpod-d2a9a2ce3b8ea31efa190972268ea25c1cb91bc973edce4c944a77f532f1a7e9.scope: Deactivated successfully.
Nov 26 11:39:56 compute-0 podman[97620]: 2025-11-26 11:39:56.31667309 +0000 UTC m=+0.087314488 container died d2a9a2ce3b8ea31efa190972268ea25c1cb91bc973edce4c944a77f532f1a7e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_leakey, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:56 compute-0 podman[97620]: 2025-11-26 11:39:56.33817461 +0000 UTC m=+0.108816008 container remove d2a9a2ce3b8ea31efa190972268ea25c1cb91bc973edce4c944a77f532f1a7e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:56 compute-0 podman[97620]: 2025-11-26 11:39:56.246527489 +0000 UTC m=+0.017168908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:56 compute-0 systemd[1]: libpod-conmon-d2a9a2ce3b8ea31efa190972268ea25c1cb91bc973edce4c944a77f532f1a7e9.scope: Deactivated successfully.
Nov 26 11:39:56 compute-0 ceph-mon[74928]: from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:39:56 compute-0 ceph-mon[74928]: Saving service mds.cephfs spec with placement compute-0
Nov 26 11:39:56 compute-0 ceph-mon[74928]: pgmap v59: 131 pgs: 18 peering, 113 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:39:56 compute-0 ceph-mon[74928]: 2.e scrub starts
Nov 26 11:39:56 compute-0 ceph-mon[74928]: 2.e scrub ok
Nov 26 11:39:56 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1aa4f920d213a4eccfa3c50cec90caf84138d148aa5e9f34a8b8b3fb41ccc57-merged.mount: Deactivated successfully.
Nov 26 11:39:56 compute-0 podman[97656]: 2025-11-26 11:39:56.446966701 +0000 UTC m=+0.026198229 container create 523cb271dbd3e6b4d35b9548205ee675aa7ad51136021786708378f08b8048aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:56 compute-0 systemd[1]: Started libpod-conmon-523cb271dbd3e6b4d35b9548205ee675aa7ad51136021786708378f08b8048aa.scope.
Nov 26 11:39:56 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09140c57e3bc1a3c9c8dc468976337d15de22ecb12d0cdbcd5192a0f08c0b4dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09140c57e3bc1a3c9c8dc468976337d15de22ecb12d0cdbcd5192a0f08c0b4dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09140c57e3bc1a3c9c8dc468976337d15de22ecb12d0cdbcd5192a0f08c0b4dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09140c57e3bc1a3c9c8dc468976337d15de22ecb12d0cdbcd5192a0f08c0b4dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:56 compute-0 podman[97656]: 2025-11-26 11:39:56.504825876 +0000 UTC m=+0.084057404 container init 523cb271dbd3e6b4d35b9548205ee675aa7ad51136021786708378f08b8048aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:39:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:39:56 compute-0 podman[97656]: 2025-11-26 11:39:56.509971031 +0000 UTC m=+0.089202560 container start 523cb271dbd3e6b4d35b9548205ee675aa7ad51136021786708378f08b8048aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 11:39:56 compute-0 podman[97656]: 2025-11-26 11:39:56.511087958 +0000 UTC m=+0.090319486 container attach 523cb271dbd3e6b4d35b9548205ee675aa7ad51136021786708378f08b8048aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 11:39:56 compute-0 podman[97656]: 2025-11-26 11:39:56.436468328 +0000 UTC m=+0.015699876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:56 compute-0 sudo[97749]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzsxahecvhpegtwglnfjxrqtgbdffhdh ; /usr/bin/python3'
Nov 26 11:39:56 compute-0 sudo[97749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:56 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 2.10 deep-scrub starts
Nov 26 11:39:56 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 2.10 deep-scrub ok
Nov 26 11:39:56 compute-0 python3[97751]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 11:39:56 compute-0 sudo[97749]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:56 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Nov 26 11:39:56 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Nov 26 11:39:56 compute-0 sudo[97822]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpirnqrhwzfpyuqcwfjngndszywpvtlu ; /usr/bin/python3'
Nov 26 11:39:56 compute-0 sudo[97822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:57 compute-0 python3[97824]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764157196.535552-37059-64627210907765/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=034e11544e08d3f6c57ef0872ea08ff526a4e1ef backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:39:57 compute-0 sudo[97822]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:57 compute-0 agitated_haibt[97669]: {
Nov 26 11:39:57 compute-0 agitated_haibt[97669]:     "2627095b-eef8-4027-bfef-68bf7cb6801f": {
Nov 26 11:39:57 compute-0 agitated_haibt[97669]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:39:57 compute-0 agitated_haibt[97669]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 11:39:57 compute-0 agitated_haibt[97669]:         "osd_id": 1,
Nov 26 11:39:57 compute-0 agitated_haibt[97669]:         "osd_uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:39:57 compute-0 agitated_haibt[97669]:         "type": "bluestore"
Nov 26 11:39:57 compute-0 agitated_haibt[97669]:     },
Nov 26 11:39:57 compute-0 agitated_haibt[97669]:     "a9ad59a0-aa2e-4d92-b571-519d2d145b6a": {
Nov 26 11:39:57 compute-0 agitated_haibt[97669]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:39:57 compute-0 agitated_haibt[97669]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 11:39:57 compute-0 agitated_haibt[97669]:         "osd_id": 0,
Nov 26 11:39:57 compute-0 agitated_haibt[97669]:         "osd_uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:39:57 compute-0 agitated_haibt[97669]:         "type": "bluestore"
Nov 26 11:39:57 compute-0 agitated_haibt[97669]:     },
Nov 26 11:39:57 compute-0 agitated_haibt[97669]:     "d56156fb-7361-4bef-b06b-1320109b4323": {
Nov 26 11:39:57 compute-0 agitated_haibt[97669]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:39:57 compute-0 agitated_haibt[97669]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 11:39:57 compute-0 agitated_haibt[97669]:         "osd_id": 2,
Nov 26 11:39:57 compute-0 agitated_haibt[97669]:         "osd_uuid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:39:57 compute-0 agitated_haibt[97669]:         "type": "bluestore"
Nov 26 11:39:57 compute-0 agitated_haibt[97669]:     }
Nov 26 11:39:57 compute-0 agitated_haibt[97669]: }
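[annotation] The agitated_haibt output matches the "ceph-volume raw list --format json" run requested by the cephadm call at 11:39:56: the same three BlueStore OSDs, this time keyed by osd_uuid and reported with their /dev/mapper device nodes. Re-running it by hand and cross-checking one OSD against the cluster's own view could look like:

    sudo cephadm ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- raw list --format json
    ceph osd metadata 0 --format json-pretty    # backing device and fsid as the cluster sees osd.0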
Nov 26 11:39:57 compute-0 sudo[97898]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amhqkiegxkjxnrpxlcstfkqnlorzarje ; /usr/bin/python3'
Nov 26 11:39:57 compute-0 sudo[97898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:57 compute-0 systemd[1]: libpod-523cb271dbd3e6b4d35b9548205ee675aa7ad51136021786708378f08b8048aa.scope: Deactivated successfully.
Nov 26 11:39:57 compute-0 podman[97656]: 2025-11-26 11:39:57.306535349 +0000 UTC m=+0.885766877 container died 523cb271dbd3e6b4d35b9548205ee675aa7ad51136021786708378f08b8048aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Nov 26 11:39:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-09140c57e3bc1a3c9c8dc468976337d15de22ecb12d0cdbcd5192a0f08c0b4dd-merged.mount: Deactivated successfully.
Nov 26 11:39:57 compute-0 podman[97656]: 2025-11-26 11:39:57.340010586 +0000 UTC m=+0.919242114 container remove 523cb271dbd3e6b4d35b9548205ee675aa7ad51136021786708378f08b8048aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 11:39:57 compute-0 systemd[1]: libpod-conmon-523cb271dbd3e6b4d35b9548205ee675aa7ad51136021786708378f08b8048aa.scope: Deactivated successfully.
Nov 26 11:39:57 compute-0 sudo[97550]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:39:57 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:39:57 compute-0 ceph-mon[74928]: from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:39:57 compute-0 ceph-mon[74928]: Saving service mds.cephfs spec with placement compute-0
Nov 26 11:39:57 compute-0 ceph-mon[74928]: 2.10 deep-scrub starts
Nov 26 11:39:57 compute-0 ceph-mon[74928]: 2.10 deep-scrub ok
Nov 26 11:39:57 compute-0 ceph-mon[74928]: 4.6 scrub starts
Nov 26 11:39:57 compute-0 ceph-mon[74928]: 4.6 scrub ok
Nov 26 11:39:57 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:57 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:57 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v60: 131 pgs: 18 peering, 113 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:39:57 compute-0 sudo[97912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:57 compute-0 sudo[97912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:57 compute-0 python3[97902]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:39:57 compute-0 sudo[97912]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:57 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 26 11:39:57 compute-0 podman[97937]: 2025-11-26 11:39:57.452366712 +0000 UTC m=+0.028131877 container create c3d1f7f7a53eb0d97eaa28a2d8dd35093d200a1f2acd058c82534aaeed6f5524 (image=quay.io/ceph/ceph:v18, name=serene_proskuriakova, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 11:39:57 compute-0 sudo[97938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:39:57 compute-0 sudo[97938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:57 compute-0 sudo[97938]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:57 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 26 11:39:57 compute-0 systemd[1]: Started libpod-conmon-c3d1f7f7a53eb0d97eaa28a2d8dd35093d200a1f2acd058c82534aaeed6f5524.scope.
Nov 26 11:39:57 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdfd047440367bea7a6da545c71a28fd250a2dc0225dcb5eaf118cfec4cff8ad/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdfd047440367bea7a6da545c71a28fd250a2dc0225dcb5eaf118cfec4cff8ad/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:57 compute-0 podman[97937]: 2025-11-26 11:39:57.493745818 +0000 UTC m=+0.069511004 container init c3d1f7f7a53eb0d97eaa28a2d8dd35093d200a1f2acd058c82534aaeed6f5524 (image=quay.io/ceph/ceph:v18, name=serene_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 26 11:39:57 compute-0 podman[97937]: 2025-11-26 11:39:57.498761659 +0000 UTC m=+0.074526826 container start c3d1f7f7a53eb0d97eaa28a2d8dd35093d200a1f2acd058c82534aaeed6f5524 (image=quay.io/ceph/ceph:v18, name=serene_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:39:57 compute-0 podman[97937]: 2025-11-26 11:39:57.499883496 +0000 UTC m=+0.075648662 container attach c3d1f7f7a53eb0d97eaa28a2d8dd35093d200a1f2acd058c82534aaeed6f5524 (image=quay.io/ceph/ceph:v18, name=serene_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 11:39:57 compute-0 sudo[97974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:57 compute-0 sudo[97974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:57 compute-0 sudo[97974]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:57 compute-0 podman[97937]: 2025-11-26 11:39:57.441009718 +0000 UTC m=+0.016774905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:39:57 compute-0 sudo[98003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:57 compute-0 sudo[98003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:57 compute-0 sudo[98003]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:57 compute-0 sudo[98028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:57 compute-0 sudo[98028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:57 compute-0 sudo[98028]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:57 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Nov 26 11:39:57 compute-0 sudo[98053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 26 11:39:57 compute-0 sudo[98053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:57 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Nov 26 11:39:57 compute-0 podman[98153]: 2025-11-26 11:39:57.96519082 +0000 UTC m=+0.037989624 container exec 810eaed6cbde00330c0646cedaef5c2ae94579236d67ba42b8ef02d055d04ad5 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 11:39:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Nov 26 11:39:57 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4277303254' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 26 11:39:57 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4277303254' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 26 11:39:58 compute-0 podman[97937]: 2025-11-26 11:39:58.013776581 +0000 UTC m=+0.589541746 container died c3d1f7f7a53eb0d97eaa28a2d8dd35093d200a1f2acd058c82534aaeed6f5524 (image=quay.io/ceph/ceph:v18, name=serene_proskuriakova, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 11:39:58 compute-0 systemd[1]: libpod-c3d1f7f7a53eb0d97eaa28a2d8dd35093d200a1f2acd058c82534aaeed6f5524.scope: Deactivated successfully.
Nov 26 11:39:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdfd047440367bea7a6da545c71a28fd250a2dc0225dcb5eaf118cfec4cff8ad-merged.mount: Deactivated successfully.
Nov 26 11:39:58 compute-0 podman[97937]: 2025-11-26 11:39:58.07645726 +0000 UTC m=+0.652222425 container remove c3d1f7f7a53eb0d97eaa28a2d8dd35093d200a1f2acd058c82534aaeed6f5524 (image=quay.io/ceph/ceph:v18, name=serene_proskuriakova, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 11:39:58 compute-0 podman[98153]: 2025-11-26 11:39:58.079943736 +0000 UTC m=+0.152742530 container exec_died 810eaed6cbde00330c0646cedaef5c2ae94579236d67ba42b8ef02d055d04ad5 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 11:39:58 compute-0 systemd[1]: libpod-conmon-c3d1f7f7a53eb0d97eaa28a2d8dd35093d200a1f2acd058c82534aaeed6f5524.scope: Deactivated successfully.
Nov 26 11:39:58 compute-0 sudo[97898]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:58 compute-0 sudo[98053]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:39:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:39:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:39:58 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:39:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:39:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:39:58 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:58 compute-0 ceph-mon[74928]: pgmap v60: 131 pgs: 18 peering, 113 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:39:58 compute-0 ceph-mon[74928]: 3.4 scrub starts
Nov 26 11:39:58 compute-0 ceph-mon[74928]: 3.4 scrub ok
Nov 26 11:39:58 compute-0 ceph-mon[74928]: 2.12 scrub starts
Nov 26 11:39:58 compute-0 ceph-mon[74928]: 2.12 scrub ok
Nov 26 11:39:58 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/4277303254' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 26 11:39:58 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/4277303254' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 26 11:39:58 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:58 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:58 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:58 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:39:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:58 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev ab239874-2e10-4375-837f-f5d0a57d09c8 does not exist
Nov 26 11:39:58 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 9ead6938-b02d-4966-ad11-ada8022bacae does not exist
Nov 26 11:39:58 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 54e4fc9e-8df3-4437-862d-89c78d7367b1 does not exist
Nov 26 11:39:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 11:39:58 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:39:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 11:39:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:39:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:39:58 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:58 compute-0 sudo[98262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:58 compute-0 sudo[98262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:58 compute-0 sudo[98262]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:58 compute-0 sudo[98287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:39:58 compute-0 sudo[98287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:58 compute-0 sudo[98287]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:58 compute-0 sudo[98312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:39:58 compute-0 sudo[98312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:58 compute-0 sudo[98312]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:58 compute-0 sudo[98378]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocplvwqluydwmazgkvotyhgxbbuqauak ; /usr/bin/python3'
Nov 26 11:39:58 compute-0 sudo[98378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:58 compute-0 sudo[98342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 11:39:58 compute-0 sudo[98342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:39:58 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Nov 26 11:39:58 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Nov 26 11:39:58 compute-0 python3[98385]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:39:58 compute-0 podman[98396]: 2025-11-26 11:39:58.696976752 +0000 UTC m=+0.027076377 container create c86d72e77ce24a09186b496358ecabcb7e0a1e6f949efd6c092fe2d4afe453b5 (image=quay.io/ceph/ceph:v18, name=jovial_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 11:39:58 compute-0 systemd[1]: Started libpod-conmon-c86d72e77ce24a09186b496358ecabcb7e0a1e6f949efd6c092fe2d4afe453b5.scope.
Nov 26 11:39:58 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c99d5627aa1eab82d8d060ea509a82b65c494528270dfc7942c53c656884328/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c99d5627aa1eab82d8d060ea509a82b65c494528270dfc7942c53c656884328/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:58 compute-0 podman[98396]: 2025-11-26 11:39:58.757690149 +0000 UTC m=+0.087789775 container init c86d72e77ce24a09186b496358ecabcb7e0a1e6f949efd6c092fe2d4afe453b5 (image=quay.io/ceph/ceph:v18, name=jovial_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 11:39:58 compute-0 podman[98396]: 2025-11-26 11:39:58.762275589 +0000 UTC m=+0.092375215 container start c86d72e77ce24a09186b496358ecabcb7e0a1e6f949efd6c092fe2d4afe453b5 (image=quay.io/ceph/ceph:v18, name=jovial_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:58 compute-0 podman[98396]: 2025-11-26 11:39:58.76329367 +0000 UTC m=+0.093393296 container attach c86d72e77ce24a09186b496358ecabcb7e0a1e6f949efd6c092fe2d4afe453b5 (image=quay.io/ceph/ceph:v18, name=jovial_brahmagupta, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 11:39:58 compute-0 podman[98396]: 2025-11-26 11:39:58.685456359 +0000 UTC m=+0.015556005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:39:58 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 4.b scrub starts
Nov 26 11:39:58 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 4.b scrub ok
Nov 26 11:39:58 compute-0 podman[98439]: 2025-11-26 11:39:58.845872154 +0000 UTC m=+0.027217884 container create 423b8ac6425fa71d55a38527e1fb220360a91ade76417a739cffdd82a59aa64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 11:39:58 compute-0 systemd[1]: Started libpod-conmon-423b8ac6425fa71d55a38527e1fb220360a91ade76417a739cffdd82a59aa64e.scope.
Nov 26 11:39:58 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:58 compute-0 podman[98439]: 2025-11-26 11:39:58.887418265 +0000 UTC m=+0.068764005 container init 423b8ac6425fa71d55a38527e1fb220360a91ade76417a739cffdd82a59aa64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:39:58 compute-0 podman[98439]: 2025-11-26 11:39:58.891772028 +0000 UTC m=+0.073117759 container start 423b8ac6425fa71d55a38527e1fb220360a91ade76417a739cffdd82a59aa64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_rhodes, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 11:39:58 compute-0 podman[98439]: 2025-11-26 11:39:58.892768037 +0000 UTC m=+0.074113788 container attach 423b8ac6425fa71d55a38527e1fb220360a91ade76417a739cffdd82a59aa64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_rhodes, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 11:39:58 compute-0 hopeful_rhodes[98452]: 167 167
Nov 26 11:39:58 compute-0 systemd[1]: libpod-423b8ac6425fa71d55a38527e1fb220360a91ade76417a739cffdd82a59aa64e.scope: Deactivated successfully.
Nov 26 11:39:58 compute-0 podman[98439]: 2025-11-26 11:39:58.895263274 +0000 UTC m=+0.076609024 container died 423b8ac6425fa71d55a38527e1fb220360a91ade76417a739cffdd82a59aa64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_rhodes, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4d569bd6d8d74ec91657a52e796f0961bcb4c29bccdf4d4519a258783ca843f-merged.mount: Deactivated successfully.
Nov 26 11:39:58 compute-0 podman[98439]: 2025-11-26 11:39:58.918755258 +0000 UTC m=+0.100100988 container remove 423b8ac6425fa71d55a38527e1fb220360a91ade76417a739cffdd82a59aa64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_rhodes, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:58 compute-0 podman[98439]: 2025-11-26 11:39:58.833934304 +0000 UTC m=+0.015280054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:58 compute-0 systemd[1]: libpod-conmon-423b8ac6425fa71d55a38527e1fb220360a91ade76417a739cffdd82a59aa64e.scope: Deactivated successfully.
Nov 26 11:39:59 compute-0 podman[98474]: 2025-11-26 11:39:59.030284795 +0000 UTC m=+0.026684558 container create 6deac15b2d7878655d721a727aac25edc371d97dd9327e66d677624e561aafcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_newton, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 11:39:59 compute-0 systemd[1]: Started libpod-conmon-6deac15b2d7878655d721a727aac25edc371d97dd9327e66d677624e561aafcc.scope.
Nov 26 11:39:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbae02c9de1a1dc0f671f57a73a3f409d5315c41d6d8f60c7d71915168d3b407/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbae02c9de1a1dc0f671f57a73a3f409d5315c41d6d8f60c7d71915168d3b407/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbae02c9de1a1dc0f671f57a73a3f409d5315c41d6d8f60c7d71915168d3b407/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbae02c9de1a1dc0f671f57a73a3f409d5315c41d6d8f60c7d71915168d3b407/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbae02c9de1a1dc0f671f57a73a3f409d5315c41d6d8f60c7d71915168d3b407/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:59 compute-0 podman[98474]: 2025-11-26 11:39:59.081394487 +0000 UTC m=+0.077794240 container init 6deac15b2d7878655d721a727aac25edc371d97dd9327e66d677624e561aafcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_newton, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 11:39:59 compute-0 podman[98474]: 2025-11-26 11:39:59.086813329 +0000 UTC m=+0.083213081 container start 6deac15b2d7878655d721a727aac25edc371d97dd9327e66d677624e561aafcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_newton, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:59 compute-0 podman[98474]: 2025-11-26 11:39:59.089823146 +0000 UTC m=+0.086222919 container attach 6deac15b2d7878655d721a727aac25edc371d97dd9327e66d677624e561aafcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_newton, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 26 11:39:59 compute-0 podman[98474]: 2025-11-26 11:39:59.018699561 +0000 UTC m=+0.015099334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:39:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 26 11:39:59 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2212803579' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 26 11:39:59 compute-0 jovial_brahmagupta[98424]: 
Nov 26 11:39:59 compute-0 jovial_brahmagupta[98424]: {"fsid":"ebab460c-3fd7-5f66-aa87-e10c143123f7","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":117,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":31,"num_osds":3,"num_up_osds":3,"osd_up_since":1764157172,"num_in_osds":3,"osd_in_since":1764157152,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":113},{"state_name":"peering","count":18}],"num_pgs":131,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83767296,"bytes_avail":64328159232,"bytes_total":64411926528,"inactive_pgs_ratio":0.13740457594394684},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-26T11:39:43.377103+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Nov 26 11:39:59 compute-0 systemd[1]: libpod-c86d72e77ce24a09186b496358ecabcb7e0a1e6f949efd6c092fe2d4afe453b5.scope: Deactivated successfully.
Nov 26 11:39:59 compute-0 conmon[98424]: conmon c86d72e77ce24a09186b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c86d72e77ce24a09186b496358ecabcb7e0a1e6f949efd6c092fe2d4afe453b5.scope/container/memory.events
Nov 26 11:39:59 compute-0 podman[98396]: 2025-11-26 11:39:59.270265478 +0000 UTC m=+0.600365105 container died c86d72e77ce24a09186b496358ecabcb7e0a1e6f949efd6c092fe2d4afe453b5 (image=quay.io/ceph/ceph:v18, name=jovial_brahmagupta, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Nov 26 11:39:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c99d5627aa1eab82d8d060ea509a82b65c494528270dfc7942c53c656884328-merged.mount: Deactivated successfully.
Nov 26 11:39:59 compute-0 podman[98396]: 2025-11-26 11:39:59.293940347 +0000 UTC m=+0.624039974 container remove c86d72e77ce24a09186b496358ecabcb7e0a1e6f949efd6c092fe2d4afe453b5 (image=quay.io/ceph/ceph:v18, name=jovial_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 11:39:59 compute-0 systemd[1]: libpod-conmon-c86d72e77ce24a09186b496358ecabcb7e0a1e6f949efd6c092fe2d4afe453b5.scope: Deactivated successfully.
Nov 26 11:39:59 compute-0 sudo[98378]: pam_unix(sudo:session): session closed for user root
Nov 26 11:39:59 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:39:59 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:39:59 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:39:59 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:39:59 compute-0 ceph-mon[74928]: 2.14 scrub starts
Nov 26 11:39:59 compute-0 ceph-mon[74928]: 2.14 scrub ok
Nov 26 11:39:59 compute-0 ceph-mon[74928]: 4.b scrub starts
Nov 26 11:39:59 compute-0 ceph-mon[74928]: 4.b scrub ok
Nov 26 11:39:59 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2212803579' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 26 11:39:59 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v61: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:39:59 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 3.b deep-scrub starts
Nov 26 11:39:59 compute-0 sudo[98545]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rndywpsmsbyxjyszseevnvtamxuypnmd ; /usr/bin/python3'
Nov 26 11:39:59 compute-0 sudo[98545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:39:59 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 3.b deep-scrub ok
Nov 26 11:39:59 compute-0 python3[98547]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:39:59 compute-0 podman[98548]: 2025-11-26 11:39:59.571563959 +0000 UTC m=+0.025767919 container create ec652770a3887038ae7cfcaad29c993ce2254bbdac8ab88e33313f39fb1cb45b (image=quay.io/ceph/ceph:v18, name=youthful_kapitsa, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:39:59 compute-0 systemd[1]: Started libpod-conmon-ec652770a3887038ae7cfcaad29c993ce2254bbdac8ab88e33313f39fb1cb45b.scope.
Nov 26 11:39:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:39:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61cac62594869aaa811e2a7cb830c201be9633c96b4722637890d0098a0de03/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61cac62594869aaa811e2a7cb830c201be9633c96b4722637890d0098a0de03/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:39:59 compute-0 podman[98548]: 2025-11-26 11:39:59.623990186 +0000 UTC m=+0.078194155 container init ec652770a3887038ae7cfcaad29c993ce2254bbdac8ab88e33313f39fb1cb45b (image=quay.io/ceph/ceph:v18, name=youthful_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:39:59 compute-0 podman[98548]: 2025-11-26 11:39:59.628295376 +0000 UTC m=+0.082499327 container start ec652770a3887038ae7cfcaad29c993ce2254bbdac8ab88e33313f39fb1cb45b (image=quay.io/ceph/ceph:v18, name=youthful_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 11:39:59 compute-0 podman[98548]: 2025-11-26 11:39:59.630722235 +0000 UTC m=+0.084926195 container attach ec652770a3887038ae7cfcaad29c993ce2254bbdac8ab88e33313f39fb1cb45b (image=quay.io/ceph/ceph:v18, name=youthful_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:39:59 compute-0 podman[98548]: 2025-11-26 11:39:59.560995363 +0000 UTC m=+0.015199334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:39:59 compute-0 jovial_newton[98506]: --> passed data devices: 0 physical, 3 LVM
Nov 26 11:39:59 compute-0 jovial_newton[98506]: --> relative data size: 1.0
Nov 26 11:39:59 compute-0 jovial_newton[98506]: --> All data devices are unavailable
Nov 26 11:39:59 compute-0 systemd[1]: libpod-6deac15b2d7878655d721a727aac25edc371d97dd9327e66d677624e561aafcc.scope: Deactivated successfully.
Nov 26 11:39:59 compute-0 podman[98607]: 2025-11-26 11:39:59.950599442 +0000 UTC m=+0.015606531 container died 6deac15b2d7878655d721a727aac25edc371d97dd9327e66d677624e561aafcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:39:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbae02c9de1a1dc0f671f57a73a3f409d5315c41d6d8f60c7d71915168d3b407-merged.mount: Deactivated successfully.
Nov 26 11:39:59 compute-0 podman[98607]: 2025-11-26 11:39:59.978201831 +0000 UTC m=+0.043208919 container remove 6deac15b2d7878655d721a727aac25edc371d97dd9327e66d677624e561aafcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 11:39:59 compute-0 systemd[1]: libpod-conmon-6deac15b2d7878655d721a727aac25edc371d97dd9327e66d677624e561aafcc.scope: Deactivated successfully.
Nov 26 11:40:00 compute-0 sudo[98342]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:00 compute-0 sudo[98619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:00 compute-0 sudo[98619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:00 compute-0 sudo[98619]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:00 compute-0 sudo[98644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:40:00 compute-0 sudo[98644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:00 compute-0 sudo[98644]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:00 compute-0 sudo[98669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:00 compute-0 sudo[98669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 11:40:00 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1918528740' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 11:40:00 compute-0 sudo[98669]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:00 compute-0 youthful_kapitsa[98560]: 
Nov 26 11:40:00 compute-0 youthful_kapitsa[98560]: {"epoch":1,"fsid":"ebab460c-3fd7-5f66-aa87-e10c143123f7","modified":"2025-11-26T11:37:58.004547Z","created":"2025-11-26T11:37:58.004547Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Nov 26 11:40:00 compute-0 youthful_kapitsa[98560]: dumped monmap epoch 1
Nov 26 11:40:00 compute-0 systemd[1]: libpod-ec652770a3887038ae7cfcaad29c993ce2254bbdac8ab88e33313f39fb1cb45b.scope: Deactivated successfully.
Nov 26 11:40:00 compute-0 conmon[98560]: conmon ec652770a3887038ae7c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ec652770a3887038ae7cfcaad29c993ce2254bbdac8ab88e33313f39fb1cb45b.scope/container/memory.events
Nov 26 11:40:00 compute-0 podman[98548]: 2025-11-26 11:40:00.140295109 +0000 UTC m=+0.594499059 container died ec652770a3887038ae7cfcaad29c993ce2254bbdac8ab88e33313f39fb1cb45b (image=quay.io/ceph/ceph:v18, name=youthful_kapitsa, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:40:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-e61cac62594869aaa811e2a7cb830c201be9633c96b4722637890d0098a0de03-merged.mount: Deactivated successfully.
Nov 26 11:40:00 compute-0 podman[98548]: 2025-11-26 11:40:00.16446368 +0000 UTC m=+0.618667630 container remove ec652770a3887038ae7cfcaad29c993ce2254bbdac8ab88e33313f39fb1cb45b (image=quay.io/ceph/ceph:v18, name=youthful_kapitsa, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 11:40:00 compute-0 systemd[1]: libpod-conmon-ec652770a3887038ae7cfcaad29c993ce2254bbdac8ab88e33313f39fb1cb45b.scope: Deactivated successfully.
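The monmap JSON that the youthful_kapitsa container printed above ("mon dump --format json") can be post-processed to pull out the monitor's msgr v2/v1 endpoints. A minimal sketch in Python, assuming the JSON payload has been saved to monmap.json (the file name is illustrative, not from the log):

    import json

    # Load the 'ceph mon dump --format json' output captured above
    # (path is an assumption for illustration only).
    with open("monmap.json") as fh:
        monmap = json.load(fh)

    print("fsid:", monmap["fsid"], "epoch:", monmap["epoch"])
    for mon in monmap["mons"]:
        # Each monitor advertises a v2 (port 3300) and a v1 (port 6789) address.
        addrs = {a["type"]: a["addr"] for a in mon["public_addrs"]["addrvec"]}
        print(mon["name"], "rank", mon["rank"], addrs.get("v2"), addrs.get("v1"))

On this host the loop would report the single monitor compute-0 at 192.168.122.100:3300 (v2) and 192.168.122.100:6789 (v1), matching the quorum of [0] in the dump.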
Nov 26 11:40:00 compute-0 sudo[98696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- lvm list --format json
Nov 26 11:40:00 compute-0 sudo[98696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:00 compute-0 sudo[98545]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:00 compute-0 ceph-mon[74928]: pgmap v61: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:00 compute-0 ceph-mon[74928]: 3.b deep-scrub starts
Nov 26 11:40:00 compute-0 ceph-mon[74928]: 3.b deep-scrub ok
Nov 26 11:40:00 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1918528740' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 11:40:00 compute-0 podman[98763]: 2025-11-26 11:40:00.402011624 +0000 UTC m=+0.030954303 container create 80cdef9f221e7d4f4a890e3cb5614fda30547b9f4bf9d26f8b6c946cbf7f245f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_rhodes, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 26 11:40:00 compute-0 systemd[1]: Started libpod-conmon-80cdef9f221e7d4f4a890e3cb5614fda30547b9f4bf9d26f8b6c946cbf7f245f.scope.
Nov 26 11:40:00 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:00 compute-0 podman[98763]: 2025-11-26 11:40:00.45483587 +0000 UTC m=+0.083778548 container init 80cdef9f221e7d4f4a890e3cb5614fda30547b9f4bf9d26f8b6c946cbf7f245f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_rhodes, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 11:40:00 compute-0 podman[98763]: 2025-11-26 11:40:00.459355486 +0000 UTC m=+0.088298164 container start 80cdef9f221e7d4f4a890e3cb5614fda30547b9f4bf9d26f8b6c946cbf7f245f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_rhodes, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:40:00 compute-0 podman[98763]: 2025-11-26 11:40:00.460562774 +0000 UTC m=+0.089505451 container attach 80cdef9f221e7d4f4a890e3cb5614fda30547b9f4bf9d26f8b6c946cbf7f245f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_rhodes, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:40:00 compute-0 hungry_rhodes[98776]: 167 167
Nov 26 11:40:00 compute-0 systemd[1]: libpod-80cdef9f221e7d4f4a890e3cb5614fda30547b9f4bf9d26f8b6c946cbf7f245f.scope: Deactivated successfully.
Nov 26 11:40:00 compute-0 podman[98763]: 2025-11-26 11:40:00.462432771 +0000 UTC m=+0.091375449 container died 80cdef9f221e7d4f4a890e3cb5614fda30547b9f4bf9d26f8b6c946cbf7f245f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_rhodes, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 11:40:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-68a1f89a26a381a7350ae184be780359d3e0fd3e79a4dc19fdb99fdd3c800620-merged.mount: Deactivated successfully.
Nov 26 11:40:00 compute-0 podman[98763]: 2025-11-26 11:40:00.47887064 +0000 UTC m=+0.107813318 container remove 80cdef9f221e7d4f4a890e3cb5614fda30547b9f4bf9d26f8b6c946cbf7f245f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:40:00 compute-0 podman[98763]: 2025-11-26 11:40:00.391175644 +0000 UTC m=+0.020118321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:40:00 compute-0 systemd[1]: libpod-conmon-80cdef9f221e7d4f4a890e3cb5614fda30547b9f4bf9d26f8b6c946cbf7f245f.scope: Deactivated successfully.
Nov 26 11:40:00 compute-0 sudo[98816]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nayccciydkdsvxcbcadxhjiwnqzqnxja ; /usr/bin/python3'
Nov 26 11:40:00 compute-0 sudo[98816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:40:00 compute-0 podman[98824]: 2025-11-26 11:40:00.593616713 +0000 UTC m=+0.028095311 container create 06cc2911f40f89890c7b03f0ef7fe94d9ba1fe71b1ec62cef5e92b77f92e229d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:40:00 compute-0 systemd[1]: Started libpod-conmon-06cc2911f40f89890c7b03f0ef7fe94d9ba1fe71b1ec62cef5e92b77f92e229d.scope.
Nov 26 11:40:00 compute-0 python3[98818]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
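The Ansible command task logged above ultimately shells out to podman. A rough reconstruction of that invocation as a Python subprocess call, taken verbatim from the logged _raw_params (only the subprocess plumbing is added; the real task is driven by ansible.legacy.command, not this script):

    import subprocess

    # Arguments copied from the _raw_params line above; everything else
    # here is illustrative wrapper code.
    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
        "--fsid", "ebab460c-3fd7-5f66-aa87-e10c143123f7",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "auth", "get", "client.openstack",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(result.stdout)

The stdout of that run is exactly the client.openstack keyring block that the stupefied_chebyshev container prints a few lines below.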
Nov 26 11:40:00 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 26 11:40:00 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c00402cecefad4af19df9bc86f4185d152e281fca80b426bf3ac46c7a0ee616/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c00402cecefad4af19df9bc86f4185d152e281fca80b426bf3ac46c7a0ee616/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c00402cecefad4af19df9bc86f4185d152e281fca80b426bf3ac46c7a0ee616/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c00402cecefad4af19df9bc86f4185d152e281fca80b426bf3ac46c7a0ee616/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:00 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 26 11:40:00 compute-0 podman[98824]: 2025-11-26 11:40:00.651937568 +0000 UTC m=+0.086416185 container init 06cc2911f40f89890c7b03f0ef7fe94d9ba1fe71b1ec62cef5e92b77f92e229d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_taussig, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:40:00 compute-0 podman[98824]: 2025-11-26 11:40:00.656836328 +0000 UTC m=+0.091314925 container start 06cc2911f40f89890c7b03f0ef7fe94d9ba1fe71b1ec62cef5e92b77f92e229d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 11:40:00 compute-0 podman[98824]: 2025-11-26 11:40:00.658833576 +0000 UTC m=+0.093312173 container attach 06cc2911f40f89890c7b03f0ef7fe94d9ba1fe71b1ec62cef5e92b77f92e229d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_taussig, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:40:00 compute-0 podman[98840]: 2025-11-26 11:40:00.662306928 +0000 UTC m=+0.030863350 container create 44932d09a3bc3114af66b11636787a366b0d34529c1ae362da685f98c4317fb2 (image=quay.io/ceph/ceph:v18, name=stupefied_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:40:00 compute-0 podman[98824]: 2025-11-26 11:40:00.581599715 +0000 UTC m=+0.016078332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:40:00 compute-0 systemd[1]: Started libpod-conmon-44932d09a3bc3114af66b11636787a366b0d34529c1ae362da685f98c4317fb2.scope.
Nov 26 11:40:00 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98edb1a4b809b1582c0b9d45ea95597cac3b5631991e7378f31a9c18c3faf7d1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98edb1a4b809b1582c0b9d45ea95597cac3b5631991e7378f31a9c18c3faf7d1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:00 compute-0 podman[98840]: 2025-11-26 11:40:00.719837212 +0000 UTC m=+0.088393664 container init 44932d09a3bc3114af66b11636787a366b0d34529c1ae362da685f98c4317fb2 (image=quay.io/ceph/ceph:v18, name=stupefied_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 11:40:00 compute-0 podman[98840]: 2025-11-26 11:40:00.724301954 +0000 UTC m=+0.092858376 container start 44932d09a3bc3114af66b11636787a366b0d34529c1ae362da685f98c4317fb2 (image=quay.io/ceph/ceph:v18, name=stupefied_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:40:00 compute-0 podman[98840]: 2025-11-26 11:40:00.725859252 +0000 UTC m=+0.094415694 container attach 44932d09a3bc3114af66b11636787a366b0d34529c1ae362da685f98c4317fb2 (image=quay.io/ceph/ceph:v18, name=stupefied_chebyshev, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:40:00 compute-0 podman[98840]: 2025-11-26 11:40:00.650429282 +0000 UTC m=+0.018985725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:40:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Nov 26 11:40:01 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/169296351' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 26 11:40:01 compute-0 stupefied_chebyshev[98856]: [client.openstack]
Nov 26 11:40:01 compute-0 stupefied_chebyshev[98856]:         key = AQCA5iZpAAAAABAAL6WSWuWVfNotwlMauF3Tqw==
Nov 26 11:40:01 compute-0 stupefied_chebyshev[98856]:         caps mgr = "allow *"
Nov 26 11:40:01 compute-0 stupefied_chebyshev[98856]:         caps mon = "profile rbd"
Nov 26 11:40:01 compute-0 stupefied_chebyshev[98856]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
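The keyring text emitted above by "ceph auth get client.openstack" is the usual bracketed-section format. A small parser sketch, assuming the output was redirected to client.openstack.keyring (an assumed file name; in practice "--format json" would avoid hand-parsing entirely):

    # Minimal parser for the keyring text printed above (illustrative only).
    def parse_keyring(text):
        entities = {}
        current = None
        for line in text.splitlines():
            line = line.strip()
            if not line:
                continue
            if line.startswith("[") and line.endswith("]"):
                current = line[1:-1]          # e.g. "client.openstack"
                entities[current] = {}
            elif "=" in line and current is not None:
                key, value = line.split("=", 1)
                entities[current][key.strip()] = value.strip().strip('"')
        return entities

    keyring = parse_keyring(open("client.openstack.keyring").read())
    # e.g. prints the osd caps: profile rbd on the vms/volumes/backups/... pools
    print(keyring["client.openstack"]["caps osd"])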
Nov 26 11:40:01 compute-0 systemd[1]: libpod-44932d09a3bc3114af66b11636787a366b0d34529c1ae362da685f98c4317fb2.scope: Deactivated successfully.
Nov 26 11:40:01 compute-0 podman[98883]: 2025-11-26 11:40:01.243492701 +0000 UTC m=+0.018098030 container died 44932d09a3bc3114af66b11636787a366b0d34529c1ae362da685f98c4317fb2 (image=quay.io/ceph/ceph:v18, name=stupefied_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 11:40:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-98edb1a4b809b1582c0b9d45ea95597cac3b5631991e7378f31a9c18c3faf7d1-merged.mount: Deactivated successfully.
Nov 26 11:40:01 compute-0 podman[98883]: 2025-11-26 11:40:01.263435452 +0000 UTC m=+0.038040780 container remove 44932d09a3bc3114af66b11636787a366b0d34529c1ae362da685f98c4317fb2 (image=quay.io/ceph/ceph:v18, name=stupefied_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 11:40:01 compute-0 agitated_taussig[98838]: {
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:     "0": [
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:         {
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "devices": [
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "/dev/loop3"
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             ],
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "lv_name": "ceph_lv0",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "lv_size": "21470642176",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a9ad59a0-aa2e-4d92-b571-519d2d145b6a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "lv_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "name": "ceph_lv0",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "tags": {
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.block_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.cluster_name": "ceph",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.crush_device_class": "",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.encrypted": "0",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.osd_fsid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.osd_id": "0",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.type": "block",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.vdo": "0"
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             },
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "type": "block",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "vg_name": "ceph_vg0"
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:         }
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:     ],
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:     "1": [
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:         {
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "devices": [
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "/dev/loop4"
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             ],
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "lv_name": "ceph_lv1",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "lv_size": "21470642176",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2627095b-eef8-4027-bfef-68bf7cb6801f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "lv_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "name": "ceph_lv1",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "tags": {
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.block_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.cluster_name": "ceph",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.crush_device_class": "",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.encrypted": "0",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.osd_fsid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.osd_id": "1",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.type": "block",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.vdo": "0"
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             },
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "type": "block",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "vg_name": "ceph_vg1"
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:         }
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:     ],
Nov 26 11:40:01 compute-0 systemd[1]: libpod-conmon-44932d09a3bc3114af66b11636787a366b0d34529c1ae362da685f98c4317fb2.scope: Deactivated successfully.
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:     "2": [
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:         {
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "devices": [
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "/dev/loop5"
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             ],
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "lv_name": "ceph_lv2",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "lv_size": "21470642176",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d56156fb-7361-4bef-b06b-1320109b4323,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "lv_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "name": "ceph_lv2",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "tags": {
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.block_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.cluster_name": "ceph",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.crush_device_class": "",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.encrypted": "0",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.osd_fsid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.osd_id": "2",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.type": "block",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:                 "ceph.vdo": "0"
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             },
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "type": "block",
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:             "vg_name": "ceph_vg2"
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:         }
Nov 26 11:40:01 compute-0 agitated_taussig[98838]:     ]
Nov 26 11:40:01 compute-0 agitated_taussig[98838]: }
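The "ceph-volume lvm list --format json" payload printed above by agitated_taussig is keyed by OSD id, each mapping to a list of LV records. A short summarising sketch, assuming the JSON was saved to ceph_volume_lvm_list.json (illustrative path):

    import json

    # Summarise the per-OSD logical volumes from the lvm list output above.
    lvm = json.load(open("ceph_volume_lvm_list.json"))  # path is an assumption
    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            size_gib = int(lv["lv_size"]) / 1024 ** 3
            print(f"osd.{osd_id}: {lv['lv_path']} ({size_gib:.1f} GiB) "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"devices={','.join(lv['devices'])}")

For this host that yields three ~20 GiB block LVs, osd.0 through osd.2 on /dev/ceph_vg0..2/ceph_lv0..2, backed by /dev/loop3, /dev/loop4 and /dev/loop5 respectively.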
Nov 26 11:40:01 compute-0 sudo[98816]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:01 compute-0 systemd[1]: libpod-06cc2911f40f89890c7b03f0ef7fe94d9ba1fe71b1ec62cef5e92b77f92e229d.scope: Deactivated successfully.
Nov 26 11:40:01 compute-0 podman[98897]: 2025-11-26 11:40:01.315316208 +0000 UTC m=+0.015458252 container died 06cc2911f40f89890c7b03f0ef7fe94d9ba1fe71b1ec62cef5e92b77f92e229d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_taussig, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 26 11:40:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c00402cecefad4af19df9bc86f4185d152e281fca80b426bf3ac46c7a0ee616-merged.mount: Deactivated successfully.
Nov 26 11:40:01 compute-0 podman[98897]: 2025-11-26 11:40:01.343113815 +0000 UTC m=+0.043255858 container remove 06cc2911f40f89890c7b03f0ef7fe94d9ba1fe71b1ec62cef5e92b77f92e229d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_taussig, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Nov 26 11:40:01 compute-0 systemd[1]: libpod-conmon-06cc2911f40f89890c7b03f0ef7fe94d9ba1fe71b1ec62cef5e92b77f92e229d.scope: Deactivated successfully.
Nov 26 11:40:01 compute-0 sudo[98696]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:01 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v62: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:01 compute-0 ceph-mon[74928]: 2.1a scrub starts
Nov 26 11:40:01 compute-0 ceph-mon[74928]: 2.1a scrub ok
Nov 26 11:40:01 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/169296351' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 26 11:40:01 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 26 11:40:01 compute-0 sudo[98909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:01 compute-0 sudo[98909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:01 compute-0 sudo[98909]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:01 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 26 11:40:01 compute-0 sudo[98934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:40:01 compute-0 sudo[98934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:01 compute-0 sudo[98934]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:01 compute-0 sudo[98959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:01 compute-0 sudo[98959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:01 compute-0 sudo[98959]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:40:01 compute-0 sudo[98984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- raw list --format json
Nov 26 11:40:01 compute-0 sudo[98984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:01 compute-0 podman[99039]: 2025-11-26 11:40:01.750971506 +0000 UTC m=+0.026397656 container create 0ffa99cfc48caa990b1c4f89d2d2878f22b591e8e8f6a1e6d9925afafbf5f565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 11:40:01 compute-0 systemd[1]: Started libpod-conmon-0ffa99cfc48caa990b1c4f89d2d2878f22b591e8e8f6a1e6d9925afafbf5f565.scope.
Nov 26 11:40:01 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:01 compute-0 podman[99039]: 2025-11-26 11:40:01.800839076 +0000 UTC m=+0.076265216 container init 0ffa99cfc48caa990b1c4f89d2d2878f22b591e8e8f6a1e6d9925afafbf5f565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_heisenberg, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:40:01 compute-0 podman[99039]: 2025-11-26 11:40:01.805606909 +0000 UTC m=+0.081033039 container start 0ffa99cfc48caa990b1c4f89d2d2878f22b591e8e8f6a1e6d9925afafbf5f565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_heisenberg, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 11:40:01 compute-0 podman[99039]: 2025-11-26 11:40:01.806740788 +0000 UTC m=+0.082166928 container attach 0ffa99cfc48caa990b1c4f89d2d2878f22b591e8e8f6a1e6d9925afafbf5f565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 11:40:01 compute-0 friendly_heisenberg[99053]: 167 167
Nov 26 11:40:01 compute-0 systemd[1]: libpod-0ffa99cfc48caa990b1c4f89d2d2878f22b591e8e8f6a1e6d9925afafbf5f565.scope: Deactivated successfully.
Nov 26 11:40:01 compute-0 conmon[99053]: conmon 0ffa99cfc48caa990b1c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ffa99cfc48caa990b1c4f89d2d2878f22b591e8e8f6a1e6d9925afafbf5f565.scope/container/memory.events
Nov 26 11:40:01 compute-0 podman[99039]: 2025-11-26 11:40:01.809067618 +0000 UTC m=+0.084493758 container died 0ffa99cfc48caa990b1c4f89d2d2878f22b591e8e8f6a1e6d9925afafbf5f565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_heisenberg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:40:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcda593a4866beff038d10d9fcb43c19614ff1490c614d9eace8ef6a2e70ec13-merged.mount: Deactivated successfully.
Nov 26 11:40:01 compute-0 podman[99039]: 2025-11-26 11:40:01.828317861 +0000 UTC m=+0.103744001 container remove 0ffa99cfc48caa990b1c4f89d2d2878f22b591e8e8f6a1e6d9925afafbf5f565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_heisenberg, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:40:01 compute-0 podman[99039]: 2025-11-26 11:40:01.740134424 +0000 UTC m=+0.015560584 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:40:01 compute-0 systemd[1]: libpod-conmon-0ffa99cfc48caa990b1c4f89d2d2878f22b591e8e8f6a1e6d9925afafbf5f565.scope: Deactivated successfully.
Nov 26 11:40:01 compute-0 podman[99075]: 2025-11-26 11:40:01.935170816 +0000 UTC m=+0.025984268 container create b2b1b7fb8c1436b10946a12312cad942422f669c455b894e2c5f058a6646e6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_germain, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:40:01 compute-0 systemd[1]: Started libpod-conmon-b2b1b7fb8c1436b10946a12312cad942422f669c455b894e2c5f058a6646e6ae.scope.
Nov 26 11:40:01 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c4b5e6fc60439557947765f308d19b612edd8bcbaf400aff7af58048ee3e84b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c4b5e6fc60439557947765f308d19b612edd8bcbaf400aff7af58048ee3e84b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c4b5e6fc60439557947765f308d19b612edd8bcbaf400aff7af58048ee3e84b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c4b5e6fc60439557947765f308d19b612edd8bcbaf400aff7af58048ee3e84b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:02 compute-0 podman[99075]: 2025-11-26 11:40:02.00422114 +0000 UTC m=+0.095034612 container init b2b1b7fb8c1436b10946a12312cad942422f669c455b894e2c5f058a6646e6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 11:40:02 compute-0 podman[99075]: 2025-11-26 11:40:02.00923091 +0000 UTC m=+0.100044363 container start b2b1b7fb8c1436b10946a12312cad942422f669c455b894e2c5f058a6646e6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:40:02 compute-0 podman[99075]: 2025-11-26 11:40:02.013421234 +0000 UTC m=+0.104234696 container attach b2b1b7fb8c1436b10946a12312cad942422f669c455b894e2c5f058a6646e6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 11:40:02 compute-0 podman[99075]: 2025-11-26 11:40:01.924657143 +0000 UTC m=+0.015470615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:40:02 compute-0 ceph-mon[74928]: pgmap v62: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:02 compute-0 ceph-mon[74928]: 3.d scrub starts
Nov 26 11:40:02 compute-0 ceph-mon[74928]: 3.d scrub ok
Nov 26 11:40:02 compute-0 sudo[99241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afykdjheuljzsfopljdksdsmphfgunhx ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764157202.1849709-37131-262204500277552/async_wrapper.py j333175506415 30 /home/zuul/.ansible/tmp/ansible-tmp-1764157202.1849709-37131-262204500277552/AnsiballZ_command.py _'
Nov 26 11:40:02 compute-0 sudo[99241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:40:02 compute-0 ansible-async_wrapper.py[99243]: Invoked with j333175506415 30 /home/zuul/.ansible/tmp/ansible-tmp-1764157202.1849709-37131-262204500277552/AnsiballZ_command.py _
Nov 26 11:40:02 compute-0 ansible-async_wrapper.py[99246]: Starting module and watcher
Nov 26 11:40:02 compute-0 ansible-async_wrapper.py[99246]: Start watching 99247 (30)
Nov 26 11:40:02 compute-0 ansible-async_wrapper.py[99247]: Start module (99247)
Nov 26 11:40:02 compute-0 ansible-async_wrapper.py[99243]: Return async_wrapper task started.
Nov 26 11:40:02 compute-0 sudo[99241]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:02 compute-0 python3[99249]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:40:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Nov 26 11:40:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Nov 26 11:40:02 compute-0 podman[99265]: 2025-11-26 11:40:02.696884693 +0000 UTC m=+0.029151102 container create 01c82f0b58293790dee694dab38bfdf10cd56f56ab1a4bb4cd5b0b3ab412d2d7 (image=quay.io/ceph/ceph:v18, name=pedantic_noether, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:40:02 compute-0 systemd[1]: Started libpod-conmon-01c82f0b58293790dee694dab38bfdf10cd56f56ab1a4bb4cd5b0b3ab412d2d7.scope.
Nov 26 11:40:02 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e577f4f5dc4c244135bc11fa1e72dd17c2d3fab539dfe7c18466c28c69b30b2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e577f4f5dc4c244135bc11fa1e72dd17c2d3fab539dfe7c18466c28c69b30b2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:02 compute-0 podman[99265]: 2025-11-26 11:40:02.74645508 +0000 UTC m=+0.078721509 container init 01c82f0b58293790dee694dab38bfdf10cd56f56ab1a4bb4cd5b0b3ab412d2d7 (image=quay.io/ceph/ceph:v18, name=pedantic_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 11:40:02 compute-0 podman[99265]: 2025-11-26 11:40:02.750524908 +0000 UTC m=+0.082791317 container start 01c82f0b58293790dee694dab38bfdf10cd56f56ab1a4bb4cd5b0b3ab412d2d7 (image=quay.io/ceph/ceph:v18, name=pedantic_noether, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:40:02 compute-0 podman[99265]: 2025-11-26 11:40:02.751739319 +0000 UTC m=+0.084005728 container attach 01c82f0b58293790dee694dab38bfdf10cd56f56ab1a4bb4cd5b0b3ab412d2d7 (image=quay.io/ceph/ceph:v18, name=pedantic_noether, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 11:40:02 compute-0 beautiful_germain[99089]: {
Nov 26 11:40:02 compute-0 beautiful_germain[99089]:     "2627095b-eef8-4027-bfef-68bf7cb6801f": {
Nov 26 11:40:02 compute-0 beautiful_germain[99089]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:40:02 compute-0 beautiful_germain[99089]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 11:40:02 compute-0 beautiful_germain[99089]:         "osd_id": 1,
Nov 26 11:40:02 compute-0 beautiful_germain[99089]:         "osd_uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:40:02 compute-0 beautiful_germain[99089]:         "type": "bluestore"
Nov 26 11:40:02 compute-0 beautiful_germain[99089]:     },
Nov 26 11:40:02 compute-0 beautiful_germain[99089]:     "a9ad59a0-aa2e-4d92-b571-519d2d145b6a": {
Nov 26 11:40:02 compute-0 beautiful_germain[99089]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:40:02 compute-0 beautiful_germain[99089]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 11:40:02 compute-0 beautiful_germain[99089]:         "osd_id": 0,
Nov 26 11:40:02 compute-0 beautiful_germain[99089]:         "osd_uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:40:02 compute-0 beautiful_germain[99089]:         "type": "bluestore"
Nov 26 11:40:02 compute-0 beautiful_germain[99089]:     },
Nov 26 11:40:02 compute-0 beautiful_germain[99089]:     "d56156fb-7361-4bef-b06b-1320109b4323": {
Nov 26 11:40:02 compute-0 beautiful_germain[99089]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:40:02 compute-0 beautiful_germain[99089]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 11:40:02 compute-0 beautiful_germain[99089]:         "osd_id": 2,
Nov 26 11:40:02 compute-0 beautiful_germain[99089]:         "osd_uuid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:40:02 compute-0 beautiful_germain[99089]:         "type": "bluestore"
Nov 26 11:40:02 compute-0 beautiful_germain[99089]:     }
Nov 26 11:40:02 compute-0 beautiful_germain[99089]: }
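The JSON block above, printed by the ceph-volume scan container, maps each OSD UUID on this host to its ceph_fsid, logical-volume device and osd_id. A minimal Python sketch of turning such output into an osd_id-to-device table follows; "osd_inventory.json" is a hypothetical file name for a capture of that JSON, not something the deployment writes itself.

    import json

    # Parse ceph-volume style inventory JSON (as logged above) into a simple
    # {osd_id: device} mapping. "osd_inventory.json" is a hypothetical capture
    # of the JSON block printed by the container.
    with open("osd_inventory.json") as fh:
        inventory = json.load(fh)

    osd_devices = {
        entry["osd_id"]: entry["device"]
        for entry in inventory.values()
        if entry.get("type") == "bluestore"
    }

    for osd_id in sorted(osd_devices):
        print(f"osd.{osd_id} -> {osd_devices[osd_id]}")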
Nov 26 11:40:02 compute-0 podman[99265]: 2025-11-26 11:40:02.685054837 +0000 UTC m=+0.017321266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:40:02 compute-0 systemd[1]: libpod-b2b1b7fb8c1436b10946a12312cad942422f669c455b894e2c5f058a6646e6ae.scope: Deactivated successfully.
Nov 26 11:40:02 compute-0 podman[99075]: 2025-11-26 11:40:02.78657896 +0000 UTC m=+0.877392422 container died b2b1b7fb8c1436b10946a12312cad942422f669c455b894e2c5f058a6646e6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 11:40:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c4b5e6fc60439557947765f308d19b612edd8bcbaf400aff7af58048ee3e84b-merged.mount: Deactivated successfully.
Nov 26 11:40:02 compute-0 podman[99075]: 2025-11-26 11:40:02.817346358 +0000 UTC m=+0.908159810 container remove b2b1b7fb8c1436b10946a12312cad942422f669c455b894e2c5f058a6646e6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:40:02 compute-0 systemd[1]: libpod-conmon-b2b1b7fb8c1436b10946a12312cad942422f669c455b894e2c5f058a6646e6ae.scope: Deactivated successfully.
Nov 26 11:40:02 compute-0 sudo[98984]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:40:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:40:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:02 compute-0 ceph-mgr[75197]: [progress INFO root] update: starting ev 356dba9e-cead-4141-b935-8a7a3f2922a9 (Updating rgw.rgw deployment (+1 -> 1))
Nov 26 11:40:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.oyquem", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 26 11:40:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.oyquem", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 26 11:40:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.oyquem", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 26 11:40:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 26 11:40:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:40:02 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:40:02 compute-0 ceph-mgr[75197]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.oyquem on compute-0
Nov 26 11:40:02 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.oyquem on compute-0
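Before deploying the RGW daemon, the mgr creates its keyring with the auth get-or-create command audited above. A sketch of the equivalent manual invocation, using Python's subprocess and the same capabilities shown in the log, is below; it assumes the ceph CLI and an admin keyring are available on the host, and it is harmless to repeat only because get-or-create returns the existing key when one already exists.

    import subprocess

    # Re-issue the capability grant the mgr dispatched above (sketch only;
    # cephadm already performed this step for rgw.rgw.compute-0.oyquem).
    cmd = [
        "ceph", "auth", "get-or-create",
        "client.rgw.rgw.compute-0.oyquem",
        "mon", "allow *",
        "mgr", "allow rw",
        "osd", "allow rwx tag rgw *=*",
    ]
    keyring = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    print(keyring)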
Nov 26 11:40:02 compute-0 sudo[99304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:02 compute-0 sudo[99304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:02 compute-0 sudo[99304]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:02 compute-0 sudo[99329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:40:02 compute-0 sudo[99329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:02 compute-0 sudo[99329]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:02 compute-0 sudo[99354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:02 compute-0 sudo[99354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:02 compute-0 sudo[99354]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:03 compute-0 sudo[99379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7
Nov 26 11:40:03 compute-0 sudo[99379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:03 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 11:40:03 compute-0 pedantic_noether[99285]: 
Nov 26 11:40:03 compute-0 pedantic_noether[99285]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
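The short-lived pedantic_noether container prints the orchestrator status document seen above, the same JSON that "ceph orch status --format json" returns (a later ansible task runs exactly that command). A small sketch of gating further automation on that output, assuming the JSON has been captured into a string, might be:

    import json

    # Output of "ceph orch status --format json", copied from the log line above.
    raw = '{"available": true, "backend": "cephadm", "paused": false, "workers": 10}'
    status = json.loads(raw)

    # Only proceed with deployments when the cephadm backend is up and not paused.
    if status["available"] and not status["paused"]:
        print(f"orchestrator ready ({status['backend']}, {status['workers']} workers)")
    else:
        raise SystemExit("orchestrator unavailable or paused")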
Nov 26 11:40:03 compute-0 systemd[1]: libpod-01c82f0b58293790dee694dab38bfdf10cd56f56ab1a4bb4cd5b0b3ab412d2d7.scope: Deactivated successfully.
Nov 26 11:40:03 compute-0 podman[99265]: 2025-11-26 11:40:03.213823022 +0000 UTC m=+0.546089431 container died 01c82f0b58293790dee694dab38bfdf10cd56f56ab1a4bb4cd5b0b3ab412d2d7 (image=quay.io/ceph/ceph:v18, name=pedantic_noether, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 11:40:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e577f4f5dc4c244135bc11fa1e72dd17c2d3fab539dfe7c18466c28c69b30b2-merged.mount: Deactivated successfully.
Nov 26 11:40:03 compute-0 podman[99265]: 2025-11-26 11:40:03.241263775 +0000 UTC m=+0.573530173 container remove 01c82f0b58293790dee694dab38bfdf10cd56f56ab1a4bb4cd5b0b3ab412d2d7 (image=quay.io/ceph/ceph:v18, name=pedantic_noether, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 11:40:03 compute-0 systemd[1]: libpod-conmon-01c82f0b58293790dee694dab38bfdf10cd56f56ab1a4bb4cd5b0b3ab412d2d7.scope: Deactivated successfully.
Nov 26 11:40:03 compute-0 ansible-async_wrapper.py[99247]: Module complete (99247)
Nov 26 11:40:03 compute-0 podman[99468]: 2025-11-26 11:40:03.274953105 +0000 UTC m=+0.027482955 container create ffcc46675cede803c96cc6e0b7c3b898f2e47928826d99eccb85dc2339ba671c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 11:40:03 compute-0 systemd[1]: Started libpod-conmon-ffcc46675cede803c96cc6e0b7c3b898f2e47928826d99eccb85dc2339ba671c.scope.
Nov 26 11:40:03 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:03 compute-0 podman[99468]: 2025-11-26 11:40:03.32663159 +0000 UTC m=+0.079189172 container init ffcc46675cede803c96cc6e0b7c3b898f2e47928826d99eccb85dc2339ba671c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_galois, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 11:40:03 compute-0 podman[99468]: 2025-11-26 11:40:03.330651314 +0000 UTC m=+0.083181173 container start ffcc46675cede803c96cc6e0b7c3b898f2e47928826d99eccb85dc2339ba671c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_galois, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 11:40:03 compute-0 podman[99468]: 2025-11-26 11:40:03.331842931 +0000 UTC m=+0.084372780 container attach ffcc46675cede803c96cc6e0b7c3b898f2e47928826d99eccb85dc2339ba671c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_galois, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:40:03 compute-0 determined_galois[99482]: 167 167
Nov 26 11:40:03 compute-0 systemd[1]: libpod-ffcc46675cede803c96cc6e0b7c3b898f2e47928826d99eccb85dc2339ba671c.scope: Deactivated successfully.
Nov 26 11:40:03 compute-0 podman[99468]: 2025-11-26 11:40:03.333727887 +0000 UTC m=+0.086257736 container died ffcc46675cede803c96cc6e0b7c3b898f2e47928826d99eccb85dc2339ba671c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_galois, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:40:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-f45706648e56073d0870efcbf11935c2ad0a8474da7632bbb8f7abc274402116-merged.mount: Deactivated successfully.
Nov 26 11:40:03 compute-0 podman[99468]: 2025-11-26 11:40:03.350278858 +0000 UTC m=+0.102808706 container remove ffcc46675cede803c96cc6e0b7c3b898f2e47928826d99eccb85dc2339ba671c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_galois, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 11:40:03 compute-0 podman[99468]: 2025-11-26 11:40:03.264996042 +0000 UTC m=+0.017525911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:40:03 compute-0 systemd[1]: libpod-conmon-ffcc46675cede803c96cc6e0b7c3b898f2e47928826d99eccb85dc2339ba671c.scope: Deactivated successfully.
Nov 26 11:40:03 compute-0 systemd[1]: Reloading.
Nov 26 11:40:03 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v63: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:03 compute-0 systemd-rc-local-generator[99519]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:40:03 compute-0 systemd-sysv-generator[99522]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:40:03 compute-0 systemd[1]: Reloading.
Nov 26 11:40:03 compute-0 systemd-rc-local-generator[99603]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:40:03 compute-0 systemd-sysv-generator[99606]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:40:03 compute-0 sudo[99591]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izyhtcvzpkdbrovqxkuaxrkelrmsxdbw ; /usr/bin/python3'
Nov 26 11:40:03 compute-0 sudo[99591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:40:03 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.oyquem for ebab460c-3fd7-5f66-aa87-e10c143123f7...
Nov 26 11:40:03 compute-0 ceph-mon[74928]: 2.1e scrub starts
Nov 26 11:40:03 compute-0 ceph-mon[74928]: 2.1e scrub ok
Nov 26 11:40:03 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:03 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:03 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.oyquem", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 26 11:40:03 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.oyquem", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 26 11:40:03 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:03 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:40:03 compute-0 ceph-mon[74928]: Deploying daemon rgw.rgw.compute-0.oyquem on compute-0
Nov 26 11:40:03 compute-0 ceph-mon[74928]: from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 11:40:03 compute-0 ceph-mon[74928]: pgmap v63: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:03 compute-0 python3[99624]: ansible-ansible.legacy.async_status Invoked with jid=j333175506415.99243 mode=status _async_dir=/root/.ansible_async
Nov 26 11:40:03 compute-0 sudo[99591]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:03 compute-0 podman[99662]: 2025-11-26 11:40:03.978133597 +0000 UTC m=+0.029293390 container create 055a3b698c561c16317a692fd8f32051abc1b015333c89be3328cbd4183162a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-rgw-rgw-compute-0-oyquem, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 11:40:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/426c09f9d21898c16b119440c591ee902feacccec8b69269e45fd1dfd52d070b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/426c09f9d21898c16b119440c591ee902feacccec8b69269e45fd1dfd52d070b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/426c09f9d21898c16b119440c591ee902feacccec8b69269e45fd1dfd52d070b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/426c09f9d21898c16b119440c591ee902feacccec8b69269e45fd1dfd52d070b/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.oyquem supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:04 compute-0 rsyslogd[960]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 11:40:04 compute-0 podman[99662]: 2025-11-26 11:40:04.021359858 +0000 UTC m=+0.072519661 container init 055a3b698c561c16317a692fd8f32051abc1b015333c89be3328cbd4183162a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-rgw-rgw-compute-0-oyquem, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 11:40:04 compute-0 podman[99662]: 2025-11-26 11:40:04.025138165 +0000 UTC m=+0.076297958 container start 055a3b698c561c16317a692fd8f32051abc1b015333c89be3328cbd4183162a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-rgw-rgw-compute-0-oyquem, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 11:40:04 compute-0 bash[99662]: 055a3b698c561c16317a692fd8f32051abc1b015333c89be3328cbd4183162a1
Nov 26 11:40:04 compute-0 podman[99662]: 2025-11-26 11:40:03.966251653 +0000 UTC m=+0.017411466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:40:04 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.oyquem for ebab460c-3fd7-5f66-aa87-e10c143123f7.
Nov 26 11:40:04 compute-0 sudo[99726]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkbmzmdovppghmqigsvutkwxtajzdcir ; /usr/bin/python3'
Nov 26 11:40:04 compute-0 sudo[99726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:40:04 compute-0 sudo[99379]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:40:04 compute-0 radosgw[99725]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 26 11:40:04 compute-0 radosgw[99725]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Nov 26 11:40:04 compute-0 radosgw[99725]: framework: beast
Nov 26 11:40:04 compute-0 radosgw[99725]: framework conf key: endpoint, val: 192.168.122.100:8082
Nov 26 11:40:04 compute-0 radosgw[99725]: init_numa not setting numa affinity
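radosgw reports the beast frontend configured for endpoint 192.168.122.100:8082. A minimal liveness probe against that endpoint is sketched below; it assumes the address is reachable from where the check runs and that an unauthenticated GET of / returns the usual S3 bucket-listing XML rather than being blocked.

    import urllib.request

    # Probe the RGW beast endpoint logged above. An anonymous GET of "/"
    # normally returns an S3 ListAllMyBuckets XML document with HTTP 200.
    url = "http://192.168.122.100:8082/"
    with urllib.request.urlopen(url, timeout=5) as resp:
        print(resp.status, resp.read(200))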
Nov 26 11:40:04 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:40:04 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 26 11:40:04 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:04 compute-0 ceph-mgr[75197]: [progress INFO root] complete: finished ev 356dba9e-cead-4141-b935-8a7a3f2922a9 (Updating rgw.rgw deployment (+1 -> 1))
Nov 26 11:40:04 compute-0 ceph-mgr[75197]: [progress INFO root] Completed event 356dba9e-cead-4141-b935-8a7a3f2922a9 (Updating rgw.rgw deployment (+1 -> 1)) in 1 seconds
Nov 26 11:40:04 compute-0 ceph-mgr[75197]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Nov 26 11:40:04 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 26 11:40:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 26 11:40:04 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 26 11:40:04 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:04 compute-0 ceph-mgr[75197]: [progress INFO root] update: starting ev ff3865fc-48b0-42cf-8a1c-29a3da6f4e27 (Updating mds.cephfs deployment (+1 -> 1))
Nov 26 11:40:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.hvqwax", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 26 11:40:04 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.hvqwax", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 26 11:40:04 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.hvqwax", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 26 11:40:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:40:04 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:40:04 compute-0 ceph-mgr[75197]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.hvqwax on compute-0
Nov 26 11:40:04 compute-0 ceph-mgr[75197]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.hvqwax on compute-0
Nov 26 11:40:04 compute-0 sudo[99790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:04 compute-0 sudo[99790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:04 compute-0 sudo[99790]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:04 compute-0 python3[99736]: ansible-ansible.legacy.async_status Invoked with jid=j333175506415.99243 mode=cleanup _async_dir=/root/.ansible_async
Nov 26 11:40:04 compute-0 sudo[99726]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:04 compute-0 sudo[99815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:40:04 compute-0 sudo[99815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:04 compute-0 sudo[99815]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:04 compute-0 sudo[99840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:04 compute-0 sudo[99840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:04 compute-0 sudo[99840]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:04 compute-0 sudo[99865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7
Nov 26 11:40:04 compute-0 sudo[99865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:04 compute-0 podman[99923]: 2025-11-26 11:40:04.53862915 +0000 UTC m=+0.029358081 container create ec423b35f255e4baa7ab2f60e75c9e9ed66bc17cd1138ef22d8555db915887b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_montalcini, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 11:40:04 compute-0 sudo[99956]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbmnduwnychzykzgcbwwlfhcdacwzlev ; /usr/bin/python3'
Nov 26 11:40:04 compute-0 sudo[99956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:40:04 compute-0 systemd[1]: Started libpod-conmon-ec423b35f255e4baa7ab2f60e75c9e9ed66bc17cd1138ef22d8555db915887b5.scope.
Nov 26 11:40:04 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:04 compute-0 podman[99923]: 2025-11-26 11:40:04.598839579 +0000 UTC m=+0.089568531 container init ec423b35f255e4baa7ab2f60e75c9e9ed66bc17cd1138ef22d8555db915887b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 11:40:04 compute-0 podman[99923]: 2025-11-26 11:40:04.603695069 +0000 UTC m=+0.094424000 container start ec423b35f255e4baa7ab2f60e75c9e9ed66bc17cd1138ef22d8555db915887b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 11:40:04 compute-0 podman[99923]: 2025-11-26 11:40:04.604746402 +0000 UTC m=+0.095475354 container attach ec423b35f255e4baa7ab2f60e75c9e9ed66bc17cd1138ef22d8555db915887b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 11:40:04 compute-0 nostalgic_montalcini[99962]: 167 167
Nov 26 11:40:04 compute-0 podman[99923]: 2025-11-26 11:40:04.607099 +0000 UTC m=+0.097827932 container died ec423b35f255e4baa7ab2f60e75c9e9ed66bc17cd1138ef22d8555db915887b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 11:40:04 compute-0 systemd[1]: libpod-ec423b35f255e4baa7ab2f60e75c9e9ed66bc17cd1138ef22d8555db915887b5.scope: Deactivated successfully.
Nov 26 11:40:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-32e3e0513cd00f0d2d2d69dfec2b5e2fec64d1b39031a9146fde1ab3482cc988-merged.mount: Deactivated successfully.
Nov 26 11:40:04 compute-0 podman[99923]: 2025-11-26 11:40:04.526077574 +0000 UTC m=+0.016806525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:40:04 compute-0 podman[99923]: 2025-11-26 11:40:04.624487321 +0000 UTC m=+0.115216252 container remove ec423b35f255e4baa7ab2f60e75c9e9ed66bc17cd1138ef22d8555db915887b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_montalcini, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 11:40:04 compute-0 systemd[1]: libpod-conmon-ec423b35f255e4baa7ab2f60e75c9e9ed66bc17cd1138ef22d8555db915887b5.scope: Deactivated successfully.
Nov 26 11:40:04 compute-0 systemd[1]: Reloading.
Nov 26 11:40:04 compute-0 python3[99959]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:40:04 compute-0 systemd-rc-local-generator[100004]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:40:04 compute-0 systemd-sysv-generator[100010]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:40:04 compute-0 podman[99980]: 2025-11-26 11:40:04.729045447 +0000 UTC m=+0.039158979 container create de7e964296e9053415989d967695754218cfd6465e0eca9583c443edad4195c5 (image=quay.io/ceph/ceph:v18, name=elastic_margulis, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:40:04 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 26 11:40:04 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 26 11:40:04 compute-0 podman[99980]: 2025-11-26 11:40:04.713071003 +0000 UTC m=+0.023184545 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:40:04 compute-0 systemd[1]: Started libpod-conmon-de7e964296e9053415989d967695754218cfd6465e0eca9583c443edad4195c5.scope.
Nov 26 11:40:04 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/024d675b620fc2fdea494ce29d4355288efe17a538db7272f852225e51ff8d0d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/024d675b620fc2fdea494ce29d4355288efe17a538db7272f852225e51ff8d0d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:04 compute-0 podman[99980]: 2025-11-26 11:40:04.901784197 +0000 UTC m=+0.211897739 container init de7e964296e9053415989d967695754218cfd6465e0eca9583c443edad4195c5 (image=quay.io/ceph/ceph:v18, name=elastic_margulis, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 11:40:04 compute-0 podman[99980]: 2025-11-26 11:40:04.908674414 +0000 UTC m=+0.218787936 container start de7e964296e9053415989d967695754218cfd6465e0eca9583c443edad4195c5 (image=quay.io/ceph/ceph:v18, name=elastic_margulis, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:40:04 compute-0 systemd[1]: Reloading.
Nov 26 11:40:04 compute-0 podman[99980]: 2025-11-26 11:40:04.909948197 +0000 UTC m=+0.220061719 container attach de7e964296e9053415989d967695754218cfd6465e0eca9583c443edad4195c5 (image=quay.io/ceph/ceph:v18, name=elastic_margulis, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 11:40:04 compute-0 systemd-sysv-generator[100058]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:40:04 compute-0 systemd-rc-local-generator[100055]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:40:05 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 26 11:40:05 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Nov 26 11:40:05 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Nov 26 11:40:05 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Nov 26 11:40:05 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2607986314' entity='client.rgw.rgw.compute-0.oyquem' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 26 11:40:05 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:05 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:05 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:05 compute-0 ceph-mon[74928]: Saving service rgw.rgw spec with placement compute-0
Nov 26 11:40:05 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:05 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:05 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.hvqwax", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 26 11:40:05 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.hvqwax", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 26 11:40:05 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:40:05 compute-0 ceph-mon[74928]: Deploying daemon mds.cephfs.compute-0.hvqwax on compute-0
Nov 26 11:40:05 compute-0 ceph-mon[74928]: 4.c scrub starts
Nov 26 11:40:05 compute-0 ceph-mon[74928]: 4.c scrub ok
Nov 26 11:40:05 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.hvqwax for ebab460c-3fd7-5f66-aa87-e10c143123f7...
Nov 26 11:40:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 32 pg[8.0( empty local-lis/les=0/0 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:05 compute-0 podman[100129]: 2025-11-26 11:40:05.301528053 +0000 UTC m=+0.030124426 container create 022a553e1181d0ee3662afb543d492866899bf8e839c23e8f0bb5d99f487f636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mds-cephfs-compute-0-hvqwax, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:40:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c4359c38715eef90c6195eda91c6fe95fc5599d68f27c4bd2fdac87479c1c86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c4359c38715eef90c6195eda91c6fe95fc5599d68f27c4bd2fdac87479c1c86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c4359c38715eef90c6195eda91c6fe95fc5599d68f27c4bd2fdac87479c1c86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c4359c38715eef90c6195eda91c6fe95fc5599d68f27c4bd2fdac87479c1c86/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.hvqwax supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:05 compute-0 podman[100129]: 2025-11-26 11:40:05.350376089 +0000 UTC m=+0.078972482 container init 022a553e1181d0ee3662afb543d492866899bf8e839c23e8f0bb5d99f487f636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mds-cephfs-compute-0-hvqwax, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:40:05 compute-0 podman[100129]: 2025-11-26 11:40:05.354651374 +0000 UTC m=+0.083247746 container start 022a553e1181d0ee3662afb543d492866899bf8e839c23e8f0bb5d99f487f636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mds-cephfs-compute-0-hvqwax, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:40:05 compute-0 bash[100129]: 022a553e1181d0ee3662afb543d492866899bf8e839c23e8f0bb5d99f487f636
Nov 26 11:40:05 compute-0 podman[100129]: 2025-11-26 11:40:05.287818351 +0000 UTC m=+0.016414744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:40:05 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.hvqwax for ebab460c-3fd7-5f66-aa87-e10c143123f7.
Nov 26 11:40:05 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14263 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 11:40:05 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v65: 132 pgs: 1 unknown, 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:05 compute-0 elastic_margulis[100028]: 
Nov 26 11:40:05 compute-0 elastic_margulis[100028]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 26 11:40:05 compute-0 ceph-mds[100145]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 11:40:05 compute-0 ceph-mds[100145]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 26 11:40:05 compute-0 ceph-mds[100145]: main not setting numa affinity
Nov 26 11:40:05 compute-0 ceph-mds[100145]: pidfile_write: ignore empty --pid-file
Nov 26 11:40:05 compute-0 sudo[99865]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:05 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mds-cephfs-compute-0-hvqwax[100141]: starting mds.cephfs.compute-0.hvqwax at 
Nov 26 11:40:05 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:40:05 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:05 compute-0 ceph-mds[100145]: mds.cephfs.compute-0.hvqwax Updating MDS map to version 2 from mon.0
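The new MDS registers with the mon and picks up MDS map version 2. A sketch of confirming afterwards that the daemon is visible in the FSMap is below; it assumes "ceph fs dump --format json" is run with admin credentials and that the output carries the usual standbys/filesystems keys.

    import json
    import subprocess

    # Dump the FSMap and list every MDS the mon currently knows about,
    # whether standby or attached to a filesystem.
    out = subprocess.run(
        ["ceph", "fs", "dump", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    fsmap = json.loads(out)

    names = [info.get("name") for info in fsmap.get("standbys", [])]
    for fs in fsmap.get("filesystems", []):
        names += [i.get("name") for i in fs.get("mdsmap", {}).get("info", {}).values()]
    print("known MDS daemons:", names)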
Nov 26 11:40:05 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:40:05 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:05 compute-0 systemd[1]: libpod-de7e964296e9053415989d967695754218cfd6465e0eca9583c443edad4195c5.scope: Deactivated successfully.
Nov 26 11:40:05 compute-0 podman[99980]: 2025-11-26 11:40:05.397980378 +0000 UTC m=+0.708093900 container died de7e964296e9053415989d967695754218cfd6465e0eca9583c443edad4195c5 (image=quay.io/ceph/ceph:v18, name=elastic_margulis, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 11:40:05 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 26 11:40:05 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:05 compute-0 ceph-mgr[75197]: [progress INFO root] complete: finished ev ff3865fc-48b0-42cf-8a1c-29a3da6f4e27 (Updating mds.cephfs deployment (+1 -> 1))
Nov 26 11:40:05 compute-0 ceph-mgr[75197]: [progress INFO root] Completed event ff3865fc-48b0-42cf-8a1c-29a3da6f4e27 (Updating mds.cephfs deployment (+1 -> 1)) in 1 seconds
Nov 26 11:40:05 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Nov 26 11:40:05 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:05 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 26 11:40:05 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-024d675b620fc2fdea494ce29d4355288efe17a538db7272f852225e51ff8d0d-merged.mount: Deactivated successfully.
Nov 26 11:40:05 compute-0 podman[99980]: 2025-11-26 11:40:05.424146828 +0000 UTC m=+0.734260350 container remove de7e964296e9053415989d967695754218cfd6465e0eca9583c443edad4195c5 (image=quay.io/ceph/ceph:v18, name=elastic_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:40:05 compute-0 systemd[1]: libpod-conmon-de7e964296e9053415989d967695754218cfd6465e0eca9583c443edad4195c5.scope: Deactivated successfully.
Nov 26 11:40:05 compute-0 sudo[99956]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:05 compute-0 sudo[100173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:05 compute-0 sudo[100173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:05 compute-0 sudo[100173]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:05 compute-0 sudo[100202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:40:05 compute-0 sudo[100202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:05 compute-0 sudo[100202]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:05 compute-0 sudo[100227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:05 compute-0 sudo[100227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:05 compute-0 sudo[100227]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:05 compute-0 sudo[100252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:40:05 compute-0 sudo[100252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:05 compute-0 sudo[100252]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:05 compute-0 sudo[100277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:05 compute-0 sudo[100277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:05 compute-0 sudo[100277]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:05 compute-0 sudo[100302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 26 11:40:05 compute-0 sudo[100302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:05 compute-0 podman[100381]: 2025-11-26 11:40:05.988716967 +0000 UTC m=+0.037280977 container exec 810eaed6cbde00330c0646cedaef5c2ae94579236d67ba42b8ef02d055d04ad5 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 11:40:05 compute-0 sudo[100420]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnlhtrrrodnvwpndgroxebdpdvpqrfld ; /usr/bin/python3'
Nov 26 11:40:05 compute-0 sudo[100420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:40:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 26 11:40:06 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2607986314' entity='client.rgw.rgw.compute-0.oyquem' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 26 11:40:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Nov 26 11:40:06 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Nov 26 11:40:06 compute-0 ceph-mon[74928]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 11:40:06 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 33 pg[8.0( empty local-lis/les=32/33 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:06 compute-0 ceph-mon[74928]: osdmap e32: 3 total, 3 up, 3 in
Nov 26 11:40:06 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2607986314' entity='client.rgw.rgw.compute-0.oyquem' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 26 11:40:06 compute-0 ceph-mon[74928]: from='client.14263 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 11:40:06 compute-0 ceph-mon[74928]: pgmap v65: 132 pgs: 1 unknown, 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:06 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:06 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:06 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:06 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:06 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:06 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2607986314' entity='client.rgw.rgw.compute-0.oyquem' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 26 11:40:06 compute-0 podman[100381]: 2025-11-26 11:40:06.094855595 +0000 UTC m=+0.143419624 container exec_died 810eaed6cbde00330c0646cedaef5c2ae94579236d67ba42b8ef02d055d04ad5 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 26 11:40:06 compute-0 python3[100422]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:40:06 compute-0 podman[100439]: 2025-11-26 11:40:06.158443005 +0000 UTC m=+0.028536481 container create 74bcc955997261a52850aa96ec091ab2d9e733d0dfc9ed4490296c6dc2e7b123 (image=quay.io/ceph/ceph:v18, name=upbeat_lichterman, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 11:40:06 compute-0 systemd[1]: Started libpod-conmon-74bcc955997261a52850aa96ec091ab2d9e733d0dfc9ed4490296c6dc2e7b123.scope.
Nov 26 11:40:06 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3c49f422a8978ac8d7ac13283b7d6b810be00594d8b14a38fe25641e738031a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3c49f422a8978ac8d7ac13283b7d6b810be00594d8b14a38fe25641e738031a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:06 compute-0 podman[100439]: 2025-11-26 11:40:06.213592868 +0000 UTC m=+0.083686354 container init 74bcc955997261a52850aa96ec091ab2d9e733d0dfc9ed4490296c6dc2e7b123 (image=quay.io/ceph/ceph:v18, name=upbeat_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 11:40:06 compute-0 podman[100439]: 2025-11-26 11:40:06.220695205 +0000 UTC m=+0.090788682 container start 74bcc955997261a52850aa96ec091ab2d9e733d0dfc9ed4490296c6dc2e7b123 (image=quay.io/ceph/ceph:v18, name=upbeat_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 11:40:06 compute-0 podman[100439]: 2025-11-26 11:40:06.228691309 +0000 UTC m=+0.098784785 container attach 74bcc955997261a52850aa96ec091ab2d9e733d0dfc9ed4490296c6dc2e7b123 (image=quay.io/ceph/ceph:v18, name=upbeat_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:40:06 compute-0 podman[100439]: 2025-11-26 11:40:06.146377445 +0000 UTC m=+0.016470921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:40:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Nov 26 11:40:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Nov 26 11:40:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).mds e3 new map
Nov 26 11:40:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-26T11:39:55.253383+0000
                                           modified        2025-11-26T11:39:55.253442+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.hvqwax{-1:14265} state up:standby seq 1 addr [v2:192.168.122.100:6814/2321421268,v1:192.168.122.100:6815/2321421268] compat {c=[1],r=[1],i=[7ff]}]
Nov 26 11:40:06 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2321421268,v1:192.168.122.100:6815/2321421268] up:boot
Nov 26 11:40:06 compute-0 ceph-mds[100145]: mds.cephfs.compute-0.hvqwax Updating MDS map to version 3 from mon.0
Nov 26 11:40:06 compute-0 ceph-mds[100145]: mds.cephfs.compute-0.hvqwax Monitors have assigned me to become a standby.
Nov 26 11:40:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/2321421268,v1:192.168.122.100:6815/2321421268] as mds.0
Nov 26 11:40:06 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.hvqwax assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 26 11:40:06 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 26 11:40:06 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 26 11:40:06 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Nov 26 11:40:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.hvqwax"} v 0) v1
Nov 26 11:40:06 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.hvqwax"}]: dispatch
Nov 26 11:40:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).mds e3 all = 0
Nov 26 11:40:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).mds e4 new map
Nov 26 11:40:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-26T11:39:55.253383+0000
                                           modified        2025-11-26T11:40:06.395225+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14265}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.hvqwax{0:14265} state up:creating seq 1 addr [v2:192.168.122.100:6814/2321421268,v1:192.168.122.100:6815/2321421268] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Nov 26 11:40:06 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.hvqwax=up:creating}
Nov 26 11:40:06 compute-0 ceph-mds[100145]: mds.cephfs.compute-0.hvqwax Updating MDS map to version 4 from mon.0
Nov 26 11:40:06 compute-0 ceph-mds[100145]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 26 11:40:06 compute-0 ceph-mds[100145]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Nov 26 11:40:06 compute-0 ceph-mds[100145]: mds.0.cache creating system inode with ino:0x1
Nov 26 11:40:06 compute-0 ceph-mds[100145]: mds.0.cache creating system inode with ino:0x100
Nov 26 11:40:06 compute-0 ceph-mds[100145]: mds.0.cache creating system inode with ino:0x600
Nov 26 11:40:06 compute-0 ceph-mds[100145]: mds.0.cache creating system inode with ino:0x601
Nov 26 11:40:06 compute-0 ceph-mds[100145]: mds.0.cache creating system inode with ino:0x602
Nov 26 11:40:06 compute-0 ceph-mds[100145]: mds.0.cache creating system inode with ino:0x603
Nov 26 11:40:06 compute-0 ceph-mds[100145]: mds.0.cache creating system inode with ino:0x604
Nov 26 11:40:06 compute-0 ceph-mds[100145]: mds.0.cache creating system inode with ino:0x605
Nov 26 11:40:06 compute-0 ceph-mds[100145]: mds.0.cache creating system inode with ino:0x606
Nov 26 11:40:06 compute-0 ceph-mds[100145]: mds.0.cache creating system inode with ino:0x607
Nov 26 11:40:06 compute-0 ceph-mds[100145]: mds.0.cache creating system inode with ino:0x608
Nov 26 11:40:06 compute-0 ceph-mds[100145]: mds.0.cache creating system inode with ino:0x609
Nov 26 11:40:06 compute-0 ceph-mds[100145]: mds.0.4 creating_done
Nov 26 11:40:06 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.hvqwax is now active in filesystem cephfs as rank 0
Nov 26 11:40:06 compute-0 ceph-mgr[75197]: [progress INFO root] Writing back 9 completed events
Nov 26 11:40:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 26 11:40:06 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:40:06 compute-0 sudo[100302]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:40:06 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:40:06 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:40:06 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:40:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:40:06 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:40:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:40:06 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:06 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev feb55969-2f5b-4c7c-81f9-b59885395c39 does not exist
Nov 26 11:40:06 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 1467dd58-aeff-43b0-bba7-96ba01c75061 does not exist
Nov 26 11:40:06 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 3acfe1f5-6f0a-4f12-b396-b63a6eed41ca does not exist
Nov 26 11:40:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 11:40:06 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:40:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 11:40:06 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:40:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:40:06 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:40:06 compute-0 sudo[100584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:06 compute-0 sudo[100584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:06 compute-0 sudo[100584]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:06 compute-0 sudo[100609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:40:06 compute-0 sudo[100609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:06 compute-0 sudo[100609]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:06 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 11:40:06 compute-0 upbeat_lichterman[100464]: 
Nov 26 11:40:06 compute-0 upbeat_lichterman[100464]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Nov 26 11:40:06 compute-0 systemd[1]: libpod-74bcc955997261a52850aa96ec091ab2d9e733d0dfc9ed4490296c6dc2e7b123.scope: Deactivated successfully.
Nov 26 11:40:06 compute-0 podman[100439]: 2025-11-26 11:40:06.673780234 +0000 UTC m=+0.543873720 container died 74bcc955997261a52850aa96ec091ab2d9e733d0dfc9ed4490296c6dc2e7b123 (image=quay.io/ceph/ceph:v18, name=upbeat_lichterman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 11:40:06 compute-0 sudo[100634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:06 compute-0 sudo[100634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:06 compute-0 sudo[100634]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3c49f422a8978ac8d7ac13283b7d6b810be00594d8b14a38fe25641e738031a-merged.mount: Deactivated successfully.
Nov 26 11:40:06 compute-0 podman[100439]: 2025-11-26 11:40:06.703218114 +0000 UTC m=+0.573311590 container remove 74bcc955997261a52850aa96ec091ab2d9e733d0dfc9ed4490296c6dc2e7b123 (image=quay.io/ceph/ceph:v18, name=upbeat_lichterman, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 11:40:06 compute-0 systemd[1]: libpod-conmon-74bcc955997261a52850aa96ec091ab2d9e733d0dfc9ed4490296c6dc2e7b123.scope: Deactivated successfully.
Nov 26 11:40:06 compute-0 sudo[100420]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:06 compute-0 sudo[100667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 11:40:06 compute-0 sudo[100667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:06 compute-0 podman[100727]: 2025-11-26 11:40:06.958098835 +0000 UTC m=+0.028748692 container create 80c184e9e138e65583b41157f381bcca37bfac0f117c7f864ab92c7d73c0e74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_easley, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:40:06 compute-0 systemd[1]: Started libpod-conmon-80c184e9e138e65583b41157f381bcca37bfac0f117c7f864ab92c7d73c0e74b.scope.
Nov 26 11:40:07 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:07 compute-0 podman[100727]: 2025-11-26 11:40:07.007821059 +0000 UTC m=+0.078470926 container init 80c184e9e138e65583b41157f381bcca37bfac0f117c7f864ab92c7d73c0e74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 11:40:07 compute-0 podman[100727]: 2025-11-26 11:40:07.012234294 +0000 UTC m=+0.082884151 container start 80c184e9e138e65583b41157f381bcca37bfac0f117c7f864ab92c7d73c0e74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:40:07 compute-0 podman[100727]: 2025-11-26 11:40:07.013187372 +0000 UTC m=+0.083837229 container attach 80c184e9e138e65583b41157f381bcca37bfac0f117c7f864ab92c7d73c0e74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_easley, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:40:07 compute-0 optimistic_easley[100741]: 167 167
Nov 26 11:40:07 compute-0 systemd[1]: libpod-80c184e9e138e65583b41157f381bcca37bfac0f117c7f864ab92c7d73c0e74b.scope: Deactivated successfully.
Nov 26 11:40:07 compute-0 podman[100727]: 2025-11-26 11:40:07.015797886 +0000 UTC m=+0.086447744 container died 80c184e9e138e65583b41157f381bcca37bfac0f117c7f864ab92c7d73c0e74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_easley, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 11:40:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e3f6d79d783ea3d5a386793cb26da73d4f29fe823a929fbdec3e6099f5bdda3-merged.mount: Deactivated successfully.
Nov 26 11:40:07 compute-0 podman[100727]: 2025-11-26 11:40:07.037367174 +0000 UTC m=+0.108017021 container remove 80c184e9e138e65583b41157f381bcca37bfac0f117c7f864ab92c7d73c0e74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_easley, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 11:40:07 compute-0 podman[100727]: 2025-11-26 11:40:06.946881494 +0000 UTC m=+0.017531361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:40:07 compute-0 systemd[1]: libpod-conmon-80c184e9e138e65583b41157f381bcca37bfac0f117c7f864ab92c7d73c0e74b.scope: Deactivated successfully.
Nov 26 11:40:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 26 11:40:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Nov 26 11:40:07 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Nov 26 11:40:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Nov 26 11:40:07 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2607986314' entity='client.rgw.rgw.compute-0.oyquem' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 26 11:40:07 compute-0 ceph-mon[74928]: osdmap e33: 3 total, 3 up, 3 in
Nov 26 11:40:07 compute-0 ceph-mon[74928]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 11:40:07 compute-0 ceph-mon[74928]: 3.10 scrub starts
Nov 26 11:40:07 compute-0 ceph-mon[74928]: 3.10 scrub ok
Nov 26 11:40:07 compute-0 ceph-mon[74928]: mds.? [v2:192.168.122.100:6814/2321421268,v1:192.168.122.100:6815/2321421268] up:boot
Nov 26 11:40:07 compute-0 ceph-mon[74928]: daemon mds.cephfs.compute-0.hvqwax assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 26 11:40:07 compute-0 ceph-mon[74928]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 26 11:40:07 compute-0 ceph-mon[74928]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 26 11:40:07 compute-0 ceph-mon[74928]: fsmap cephfs:0 1 up:standby
Nov 26 11:40:07 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.hvqwax"}]: dispatch
Nov 26 11:40:07 compute-0 ceph-mon[74928]: fsmap cephfs:1 {0=cephfs.compute-0.hvqwax=up:creating}
Nov 26 11:40:07 compute-0 ceph-mon[74928]: daemon mds.cephfs.compute-0.hvqwax is now active in filesystem cephfs as rank 0
Nov 26 11:40:07 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:07 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:07 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:07 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:40:07 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:40:07 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:07 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:40:07 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:40:07 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:40:07 compute-0 ceph-mon[74928]: from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 11:40:07 compute-0 ceph-mon[74928]: osdmap e34: 3 total, 3 up, 3 in
Nov 26 11:40:07 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2607986314' entity='client.rgw.rgw.compute-0.oyquem' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 26 11:40:07 compute-0 podman[100763]: 2025-11-26 11:40:07.156135115 +0000 UTC m=+0.031352343 container create a2d04c5a0e8d0a691bbdda340cf40e6601012955eb624228190a217e66d6b1a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:40:07 compute-0 systemd[1]: Started libpod-conmon-a2d04c5a0e8d0a691bbdda340cf40e6601012955eb624228190a217e66d6b1a2.scope.
Nov 26 11:40:07 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8af946f9dc05fca05c2abbd9374371759d4a5ad7b94f1be74acfdf305aea5f02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8af946f9dc05fca05c2abbd9374371759d4a5ad7b94f1be74acfdf305aea5f02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8af946f9dc05fca05c2abbd9374371759d4a5ad7b94f1be74acfdf305aea5f02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8af946f9dc05fca05c2abbd9374371759d4a5ad7b94f1be74acfdf305aea5f02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8af946f9dc05fca05c2abbd9374371759d4a5ad7b94f1be74acfdf305aea5f02/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:07 compute-0 podman[100763]: 2025-11-26 11:40:07.202260364 +0000 UTC m=+0.077477623 container init a2d04c5a0e8d0a691bbdda340cf40e6601012955eb624228190a217e66d6b1a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:40:07 compute-0 podman[100763]: 2025-11-26 11:40:07.207670009 +0000 UTC m=+0.082887247 container start a2d04c5a0e8d0a691bbdda340cf40e6601012955eb624228190a217e66d6b1a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_joliot, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Nov 26 11:40:07 compute-0 podman[100763]: 2025-11-26 11:40:07.208780143 +0000 UTC m=+0.083997381 container attach a2d04c5a0e8d0a691bbdda340cf40e6601012955eb624228190a217e66d6b1a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_joliot, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:40:07 compute-0 podman[100763]: 2025-11-26 11:40:07.139869622 +0000 UTC m=+0.015086880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:40:07 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Nov 26 11:40:07 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 34 pg[9.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:07 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Nov 26 11:40:07 compute-0 sudo[100804]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oayzdbjiyhoigxybmmdvofccuomhpprf ; /usr/bin/python3'
Nov 26 11:40:07 compute-0 sudo[100804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:40:07 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v68: 133 pgs: 2 unknown, 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).mds e5 new map
Nov 26 11:40:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-26T11:39:55.253383+0000
                                           modified        2025-11-26T11:40:07.398416+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14265}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.hvqwax{0:14265} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/2321421268,v1:192.168.122.100:6815/2321421268] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Nov 26 11:40:07 compute-0 ceph-mds[100145]: mds.cephfs.compute-0.hvqwax Updating MDS map to version 5 from mon.0
Nov 26 11:40:07 compute-0 ceph-mds[100145]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 26 11:40:07 compute-0 ceph-mds[100145]: mds.0.4 handle_mds_map state change up:creating --> up:active
Nov 26 11:40:07 compute-0 ceph-mds[100145]: mds.0.4 recovery_done -- successful recovery!
Nov 26 11:40:07 compute-0 ceph-mds[100145]: mds.0.4 active_start
Nov 26 11:40:07 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2321421268,v1:192.168.122.100:6815/2321421268] up:active
Nov 26 11:40:07 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.hvqwax=up:active}
Nov 26 11:40:07 compute-0 python3[100806]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:40:07 compute-0 podman[100809]: 2025-11-26 11:40:07.500859805 +0000 UTC m=+0.027106063 container create 7c9a583612bbfdaff86580dc44994f9ac9e5639b337d90824ae5e1ac0d81280c (image=quay.io/ceph/ceph:v18, name=gifted_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 11:40:07 compute-0 systemd[1]: Started libpod-conmon-7c9a583612bbfdaff86580dc44994f9ac9e5639b337d90824ae5e1ac0d81280c.scope.
Nov 26 11:40:07 compute-0 ansible-async_wrapper.py[99246]: Done in kid B.
Nov 26 11:40:07 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4450c88102a6e2d1ebceca406b88a09d84dd91010da7dd1c1f55c9399cd5a03f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4450c88102a6e2d1ebceca406b88a09d84dd91010da7dd1c1f55c9399cd5a03f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:07 compute-0 podman[100809]: 2025-11-26 11:40:07.545935875 +0000 UTC m=+0.072182133 container init 7c9a583612bbfdaff86580dc44994f9ac9e5639b337d90824ae5e1ac0d81280c (image=quay.io/ceph/ceph:v18, name=gifted_darwin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 11:40:07 compute-0 podman[100809]: 2025-11-26 11:40:07.550701925 +0000 UTC m=+0.076948185 container start 7c9a583612bbfdaff86580dc44994f9ac9e5639b337d90824ae5e1ac0d81280c (image=quay.io/ceph/ceph:v18, name=gifted_darwin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 11:40:07 compute-0 podman[100809]: 2025-11-26 11:40:07.551853849 +0000 UTC m=+0.078100097 container attach 7c9a583612bbfdaff86580dc44994f9ac9e5639b337d90824ae5e1ac0d81280c (image=quay.io/ceph/ceph:v18, name=gifted_darwin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:40:07 compute-0 podman[100809]: 2025-11-26 11:40:07.489590236 +0000 UTC m=+0.015836515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:40:07 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Nov 26 11:40:07 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Nov 26 11:40:07 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14269 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 11:40:07 compute-0 gifted_darwin[100821]: 
Nov 26 11:40:07 compute-0 gifted_darwin[100821]: [{"container_id": "1abf78bcbb62", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.49%", "created": "2025-11-26T11:38:59.607607Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-11-26T11:38:59.643318Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T11:40:06.541046Z", "memory_usage": 11649679, "ports": [], "service_name": "crash", "started": "2025-11-26T11:38:59.533788Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7@crash.compute-0", "version": "18.2.7"}, {"container_id": "022a553e1181", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "7.53%", "created": "2025-11-26T11:40:05.363346Z", "daemon_id": "cephfs.compute-0.hvqwax", "daemon_name": "mds.cephfs.compute-0.hvqwax", "daemon_type": "mds", "events": ["2025-11-26T11:40:05.397082Z daemon:mds.cephfs.compute-0.hvqwax [INFO] \"Deployed mds.cephfs.compute-0.hvqwax on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T11:40:06.541310Z", "memory_usage": 12740198, "ports": [], "service_name": "mds.cephfs", "started": "2025-11-26T11:40:05.290949Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7@mds.cephfs.compute-0.hvqwax", "version": "18.2.7"}, {"container_id": "bb7060fb261e", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "31.24%", "created": "2025-11-26T11:38:02.981087Z", "daemon_id": "compute-0.mwrktr", "daemon_name": "mgr.compute-0.mwrktr", "daemon_type": "mgr", "events": ["2025-11-26T11:39:02.891518Z daemon:mgr.compute-0.mwrktr [INFO] \"Reconfigured mgr.compute-0.mwrktr on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T11:40:06.540990Z", "memory_usage": 550712115, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-26T11:38:02.921459Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7@mgr.compute-0.mwrktr", "version": "18.2.7"}, {"container_id": "810eaed6cbde", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": 
"0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "1.69%", "created": "2025-11-26T11:37:59.413047Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-11-26T11:39:02.400665Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T11:40:06.540915Z", "memory_request": 2147483648, "memory_usage": 41376808, "ports": [], "service_name": "mon", "started": "2025-11-26T11:38:01.403174Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7@mon.compute-0", "version": "18.2.7"}, {"container_id": "9ab3606df1c8", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.14%", "created": "2025-11-26T11:39:19.853079Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-11-26T11:39:19.882655Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T11:40:06.541104Z", "memory_request": 4294967296, "memory_usage": 61813555, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-26T11:39:19.802081Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7@osd.0", "version": "18.2.7"}, {"container_id": "6de475353062", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.28%", "created": "2025-11-26T11:39:23.250198Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-11-26T11:39:23.350225Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T11:40:06.541158Z", "memory_request": 4294967296, "memory_usage": 61855498, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-26T11:39:23.084994Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7@osd.1", "version": "18.2.7"}, {"container_id": "6271fc17f190", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.44%", "created": "2025-11-26T11:39:26.872324Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-11-26T11:39:26.956448Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 
'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T11:40:06.541208Z", "memory_request": 4294967296, "memory_usage": 65756200, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-26T11:39:26.695836Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7@osd.2", "version": "18.2.7"}, {"container_id": "055a3b698c56", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.59%", "created": "2025-11-26T11:40:04.035331Z", "daemon_id": "rgw.compute-0.oyquem", "daemon_name": "rgw.rgw.compute-0.oyquem", "daemon_type": "rgw", "events": ["2025-11-26T11:40:04.085382Z daemon:rgw.rgw.compute-0.oyquem [INFO] \"Deployed rgw.rgw.compute-0.oyquem on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "last_refresh": "2025-11-26T11:40:06.541260Z", "memory_usage": 18664652, "ports": [8082], "service_name": "rgw.rgw", "started": "2025-11-26T11:40:03.969426Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7@rgw.rgw.compute-0.oyquem", "version": "18.2.7"}]
Nov 26 11:40:07 compute-0 systemd[1]: libpod-7c9a583612bbfdaff86580dc44994f9ac9e5639b337d90824ae5e1ac0d81280c.scope: Deactivated successfully.
Nov 26 11:40:08 compute-0 podman[100866]: 2025-11-26 11:40:08.024210188 +0000 UTC m=+0.018606068 container died 7c9a583612bbfdaff86580dc44994f9ac9e5639b337d90824ae5e1ac0d81280c (image=quay.io/ceph/ceph:v18, name=gifted_darwin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Nov 26 11:40:08 compute-0 rsyslogd[960]: message too long (8588) with configured size 8096, begin of message is: [{"container_id": "1abf78bcbb62", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 26 11:40:08 compute-0 tender_joliot[100776]: --> passed data devices: 0 physical, 3 LVM
Nov 26 11:40:08 compute-0 tender_joliot[100776]: --> relative data size: 1.0
Nov 26 11:40:08 compute-0 tender_joliot[100776]: --> All data devices are unavailable
Nov 26 11:40:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-4450c88102a6e2d1ebceca406b88a09d84dd91010da7dd1c1f55c9399cd5a03f-merged.mount: Deactivated successfully.
Nov 26 11:40:08 compute-0 podman[100866]: 2025-11-26 11:40:08.04501249 +0000 UTC m=+0.039408360 container remove 7c9a583612bbfdaff86580dc44994f9ac9e5639b337d90824ae5e1ac0d81280c (image=quay.io/ceph/ceph:v18, name=gifted_darwin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:40:08 compute-0 systemd[1]: libpod-conmon-7c9a583612bbfdaff86580dc44994f9ac9e5639b337d90824ae5e1ac0d81280c.scope: Deactivated successfully.
Nov 26 11:40:08 compute-0 systemd[1]: libpod-a2d04c5a0e8d0a691bbdda340cf40e6601012955eb624228190a217e66d6b1a2.scope: Deactivated successfully.
Nov 26 11:40:08 compute-0 podman[100763]: 2025-11-26 11:40:08.058112411 +0000 UTC m=+0.933329649 container died a2d04c5a0e8d0a691bbdda340cf40e6601012955eb624228190a217e66d6b1a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 11:40:08 compute-0 sudo[100804]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-8af946f9dc05fca05c2abbd9374371759d4a5ad7b94f1be74acfdf305aea5f02-merged.mount: Deactivated successfully.
Nov 26 11:40:08 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 26 11:40:08 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2607986314' entity='client.rgw.rgw.compute-0.oyquem' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 26 11:40:08 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Nov 26 11:40:08 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Nov 26 11:40:08 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 35 pg[9.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:08 compute-0 podman[100763]: 2025-11-26 11:40:08.086523836 +0000 UTC m=+0.961741074 container remove a2d04c5a0e8d0a691bbdda340cf40e6601012955eb624228190a217e66d6b1a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_joliot, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:40:08 compute-0 systemd[1]: libpod-conmon-a2d04c5a0e8d0a691bbdda340cf40e6601012955eb624228190a217e66d6b1a2.scope: Deactivated successfully.
Nov 26 11:40:08 compute-0 sudo[100667]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:08 compute-0 sudo[100894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:08 compute-0 sudo[100894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:08 compute-0 sudo[100894]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:08 compute-0 sudo[100919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:40:08 compute-0 sudo[100919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:08 compute-0 sudo[100919]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:08 compute-0 sudo[100944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:08 compute-0 sudo[100944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:08 compute-0 sudo[100944]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:08 compute-0 sudo[100969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- lvm list --format json
Nov 26 11:40:08 compute-0 sudo[100969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:08 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 26 11:40:08 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 26 11:40:08 compute-0 ceph-mon[74928]: 3.13 scrub starts
Nov 26 11:40:08 compute-0 ceph-mon[74928]: 3.13 scrub ok
Nov 26 11:40:08 compute-0 ceph-mon[74928]: pgmap v68: 133 pgs: 2 unknown, 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:08 compute-0 ceph-mon[74928]: mds.? [v2:192.168.122.100:6814/2321421268,v1:192.168.122.100:6815/2321421268] up:active
Nov 26 11:40:08 compute-0 ceph-mon[74928]: fsmap cephfs:1 {0=cephfs.compute-0.hvqwax=up:active}
Nov 26 11:40:08 compute-0 ceph-mon[74928]: 4.15 scrub starts
Nov 26 11:40:08 compute-0 ceph-mon[74928]: 4.15 scrub ok
Nov 26 11:40:08 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2607986314' entity='client.rgw.rgw.compute-0.oyquem' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 26 11:40:08 compute-0 ceph-mon[74928]: osdmap e35: 3 total, 3 up, 3 in
Nov 26 11:40:08 compute-0 podman[101024]: 2025-11-26 11:40:08.48978676 +0000 UTC m=+0.026042657 container create 7d03a0da4d6286ffedfee73ae7e078fd21b994ddc61f4f59b8af976b3c29ada8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bardeen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:40:08 compute-0 systemd[1]: Started libpod-conmon-7d03a0da4d6286ffedfee73ae7e078fd21b994ddc61f4f59b8af976b3c29ada8.scope.
Nov 26 11:40:08 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:08 compute-0 podman[101024]: 2025-11-26 11:40:08.537827874 +0000 UTC m=+0.074083790 container init 7d03a0da4d6286ffedfee73ae7e078fd21b994ddc61f4f59b8af976b3c29ada8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 26 11:40:08 compute-0 podman[101024]: 2025-11-26 11:40:08.542619313 +0000 UTC m=+0.078875209 container start 7d03a0da4d6286ffedfee73ae7e078fd21b994ddc61f4f59b8af976b3c29ada8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 11:40:08 compute-0 relaxed_bardeen[101038]: 167 167
Nov 26 11:40:08 compute-0 podman[101024]: 2025-11-26 11:40:08.544661174 +0000 UTC m=+0.080917091 container attach 7d03a0da4d6286ffedfee73ae7e078fd21b994ddc61f4f59b8af976b3c29ada8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:40:08 compute-0 systemd[1]: libpod-7d03a0da4d6286ffedfee73ae7e078fd21b994ddc61f4f59b8af976b3c29ada8.scope: Deactivated successfully.
Nov 26 11:40:08 compute-0 podman[101024]: 2025-11-26 11:40:08.55511318 +0000 UTC m=+0.091369118 container died 7d03a0da4d6286ffedfee73ae7e078fd21b994ddc61f4f59b8af976b3c29ada8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bardeen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 11:40:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf4e90edc45b537f8a6f4acea3b3e736d3c7f16b577634776094872fb9c2fe34-merged.mount: Deactivated successfully.
Nov 26 11:40:08 compute-0 podman[101024]: 2025-11-26 11:40:08.573601085 +0000 UTC m=+0.109856982 container remove 7d03a0da4d6286ffedfee73ae7e078fd21b994ddc61f4f59b8af976b3c29ada8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 11:40:08 compute-0 podman[101024]: 2025-11-26 11:40:08.47937555 +0000 UTC m=+0.015631467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:40:08 compute-0 systemd[1]: libpod-conmon-7d03a0da4d6286ffedfee73ae7e078fd21b994ddc61f4f59b8af976b3c29ada8.scope: Deactivated successfully.
Nov 26 11:40:08 compute-0 podman[101060]: 2025-11-26 11:40:08.679912448 +0000 UTC m=+0.025535299 container create be58c123e12b200796e9969c36209ed94eee85c0a07ee8183b2f0eb2ed7ca00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:40:08 compute-0 systemd[1]: Started libpod-conmon-be58c123e12b200796e9969c36209ed94eee85c0a07ee8183b2f0eb2ed7ca00a.scope.
Nov 26 11:40:08 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a55b936c2dd84caec82f5438dabdaaba00291a89dddcab451322d0e145496655/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a55b936c2dd84caec82f5438dabdaaba00291a89dddcab451322d0e145496655/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a55b936c2dd84caec82f5438dabdaaba00291a89dddcab451322d0e145496655/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a55b936c2dd84caec82f5438dabdaaba00291a89dddcab451322d0e145496655/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:08 compute-0 sudo[101100]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzoqjvfouisihlaotgsccdeytbhmvxow ; /usr/bin/python3'
Nov 26 11:40:08 compute-0 podman[101060]: 2025-11-26 11:40:08.735978529 +0000 UTC m=+0.081601390 container init be58c123e12b200796e9969c36209ed94eee85c0a07ee8183b2f0eb2ed7ca00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hermann, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:40:08 compute-0 sudo[101100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:40:08 compute-0 podman[101060]: 2025-11-26 11:40:08.741499775 +0000 UTC m=+0.087122627 container start be58c123e12b200796e9969c36209ed94eee85c0a07ee8183b2f0eb2ed7ca00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hermann, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:40:08 compute-0 podman[101060]: 2025-11-26 11:40:08.742788186 +0000 UTC m=+0.088411036 container attach be58c123e12b200796e9969c36209ed94eee85c0a07ee8183b2f0eb2ed7ca00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:40:08 compute-0 podman[101060]: 2025-11-26 11:40:08.669651492 +0000 UTC m=+0.015274364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:40:08 compute-0 python3[101103]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:40:08 compute-0 podman[101105]: 2025-11-26 11:40:08.889734031 +0000 UTC m=+0.028224303 container create 48af0704864c839cc017711ecfeb7313ffa4c0e5ba54d0baee5e66d8f3ee8f15 (image=quay.io/ceph/ceph:v18, name=tender_lalande, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:40:08 compute-0 systemd[1]: Started libpod-conmon-48af0704864c839cc017711ecfeb7313ffa4c0e5ba54d0baee5e66d8f3ee8f15.scope.
Nov 26 11:40:08 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed218810b544d274177dca3bd9d7e1a159b24d18685842c3cd6018283c3cddae/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed218810b544d274177dca3bd9d7e1a159b24d18685842c3cd6018283c3cddae/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:08 compute-0 podman[101105]: 2025-11-26 11:40:08.946274527 +0000 UTC m=+0.084764819 container init 48af0704864c839cc017711ecfeb7313ffa4c0e5ba54d0baee5e66d8f3ee8f15 (image=quay.io/ceph/ceph:v18, name=tender_lalande, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:40:08 compute-0 podman[101105]: 2025-11-26 11:40:08.952018813 +0000 UTC m=+0.090509085 container start 48af0704864c839cc017711ecfeb7313ffa4c0e5ba54d0baee5e66d8f3ee8f15 (image=quay.io/ceph/ceph:v18, name=tender_lalande, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 11:40:08 compute-0 podman[101105]: 2025-11-26 11:40:08.953014701 +0000 UTC m=+0.091504973 container attach 48af0704864c839cc017711ecfeb7313ffa4c0e5ba54d0baee5e66d8f3ee8f15 (image=quay.io/ceph/ceph:v18, name=tender_lalande, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:40:08 compute-0 podman[101105]: 2025-11-26 11:40:08.878139679 +0000 UTC m=+0.016629971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:40:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 26 11:40:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Nov 26 11:40:09 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Nov 26 11:40:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 26 11:40:09 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2607986314' entity='client.rgw.rgw.compute-0.oyquem' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 26 11:40:09 compute-0 competent_hermann[101079]: {
Nov 26 11:40:09 compute-0 competent_hermann[101079]:     "0": [
Nov 26 11:40:09 compute-0 competent_hermann[101079]:         {
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "devices": [
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "/dev/loop3"
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             ],
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "lv_name": "ceph_lv0",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "lv_size": "21470642176",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a9ad59a0-aa2e-4d92-b571-519d2d145b6a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "lv_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "name": "ceph_lv0",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "tags": {
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.block_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.cluster_name": "ceph",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.crush_device_class": "",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.encrypted": "0",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.osd_fsid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.osd_id": "0",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.type": "block",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.vdo": "0"
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             },
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "type": "block",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "vg_name": "ceph_vg0"
Nov 26 11:40:09 compute-0 competent_hermann[101079]:         }
Nov 26 11:40:09 compute-0 competent_hermann[101079]:     ],
Nov 26 11:40:09 compute-0 competent_hermann[101079]:     "1": [
Nov 26 11:40:09 compute-0 competent_hermann[101079]:         {
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "devices": [
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "/dev/loop4"
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             ],
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "lv_name": "ceph_lv1",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "lv_size": "21470642176",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2627095b-eef8-4027-bfef-68bf7cb6801f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "lv_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "name": "ceph_lv1",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "tags": {
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.block_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.cluster_name": "ceph",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.crush_device_class": "",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.encrypted": "0",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.osd_fsid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.osd_id": "1",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.type": "block",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.vdo": "0"
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             },
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "type": "block",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "vg_name": "ceph_vg1"
Nov 26 11:40:09 compute-0 competent_hermann[101079]:         }
Nov 26 11:40:09 compute-0 competent_hermann[101079]:     ],
Nov 26 11:40:09 compute-0 competent_hermann[101079]:     "2": [
Nov 26 11:40:09 compute-0 competent_hermann[101079]:         {
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "devices": [
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "/dev/loop5"
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             ],
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "lv_name": "ceph_lv2",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "lv_size": "21470642176",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d56156fb-7361-4bef-b06b-1320109b4323,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "lv_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "name": "ceph_lv2",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "tags": {
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.block_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.cluster_name": "ceph",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.crush_device_class": "",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.encrypted": "0",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.osd_fsid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.osd_id": "2",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.type": "block",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:                 "ceph.vdo": "0"
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             },
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "type": "block",
Nov 26 11:40:09 compute-0 competent_hermann[101079]:             "vg_name": "ceph_vg2"
Nov 26 11:40:09 compute-0 competent_hermann[101079]:         }
Nov 26 11:40:09 compute-0 competent_hermann[101079]:     ]
Nov 26 11:40:09 compute-0 competent_hermann[101079]: }
Nov 26 11:40:09 compute-0 systemd[1]: libpod-be58c123e12b200796e9969c36209ed94eee85c0a07ee8183b2f0eb2ed7ca00a.scope: Deactivated successfully.
Nov 26 11:40:09 compute-0 podman[101060]: 2025-11-26 11:40:09.373483122 +0000 UTC m=+0.719105974 container died be58c123e12b200796e9969c36209ed94eee85c0a07ee8183b2f0eb2ed7ca00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 11:40:09 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v71: 134 pgs: 1 unknown, 133 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s
Nov 26 11:40:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-a55b936c2dd84caec82f5438dabdaaba00291a89dddcab451322d0e145496655-merged.mount: Deactivated successfully.
Nov 26 11:40:09 compute-0 ceph-mon[74928]: from='client.14269 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 11:40:09 compute-0 ceph-mon[74928]: 3.14 scrub starts
Nov 26 11:40:09 compute-0 ceph-mon[74928]: 3.14 scrub ok
Nov 26 11:40:09 compute-0 ceph-mon[74928]: osdmap e36: 3 total, 3 up, 3 in
Nov 26 11:40:09 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2607986314' entity='client.rgw.rgw.compute-0.oyquem' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 26 11:40:09 compute-0 podman[101060]: 2025-11-26 11:40:09.407081591 +0000 UTC m=+0.752704442 container remove be58c123e12b200796e9969c36209ed94eee85c0a07ee8183b2f0eb2ed7ca00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 11:40:09 compute-0 systemd[1]: libpod-conmon-be58c123e12b200796e9969c36209ed94eee85c0a07ee8183b2f0eb2ed7ca00a.scope: Deactivated successfully.
Nov 26 11:40:09 compute-0 sudo[100969]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 26 11:40:09 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1252557621' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 26 11:40:09 compute-0 tender_lalande[101117]: 
Nov 26 11:40:09 compute-0 tender_lalande[101117]: {"fsid":"ebab460c-3fd7-5f66-aa87-e10c143123f7","health":{"status":"HEALTH_WARN","checks":{"POOL_APP_NOT_ENABLED":{"severity":"HEALTH_WARN","summary":{"message":"1 pool(s) do not have an application enabled","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":127,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":36,"num_osds":3,"num_up_osds":3,"osd_up_since":1764157172,"num_in_osds":3,"osd_in_since":1764157152,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":131},{"state_name":"unknown","count":2}],"num_pgs":133,"num_pools":9,"num_objects":2,"data_bytes":459280,"bytes_used":83800064,"bytes_avail":64328126464,"bytes_total":64411926528,"unknown_pgs_ratio":0.015037594363093376},"fsmap":{"epoch":5,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.hvqwax","status":"up:active","gid":14265}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-26T11:39:43.377103+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Nov 26 11:40:09 compute-0 systemd[1]: libpod-48af0704864c839cc017711ecfeb7313ffa4c0e5ba54d0baee5e66d8f3ee8f15.scope: Deactivated successfully.
Nov 26 11:40:09 compute-0 podman[101105]: 2025-11-26 11:40:09.455875545 +0000 UTC m=+0.594365837 container died 48af0704864c839cc017711ecfeb7313ffa4c0e5ba54d0baee5e66d8f3ee8f15 (image=quay.io/ceph/ceph:v18, name=tender_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:40:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed218810b544d274177dca3bd9d7e1a159b24d18685842c3cd6018283c3cddae-merged.mount: Deactivated successfully.
Nov 26 11:40:09 compute-0 sudo[101155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:09 compute-0 podman[101105]: 2025-11-26 11:40:09.476736837 +0000 UTC m=+0.615227109 container remove 48af0704864c839cc017711ecfeb7313ffa4c0e5ba54d0baee5e66d8f3ee8f15 (image=quay.io/ceph/ceph:v18, name=tender_lalande, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:40:09 compute-0 sudo[101155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:09 compute-0 sudo[101155]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:09 compute-0 systemd[1]: libpod-conmon-48af0704864c839cc017711ecfeb7313ffa4c0e5ba54d0baee5e66d8f3ee8f15.scope: Deactivated successfully.
Nov 26 11:40:09 compute-0 sudo[101100]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:09 compute-0 sudo[101191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:40:09 compute-0 sudo[101191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:09 compute-0 sudo[101191]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:09 compute-0 sudo[101216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:09 compute-0 sudo[101216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:09 compute-0 sudo[101216]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:09 compute-0 sudo[101241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- raw list --format json
Nov 26 11:40:09 compute-0 sudo[101241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:09 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 36 pg[10.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [2] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:09 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.6 deep-scrub starts
Nov 26 11:40:09 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.6 deep-scrub ok
Nov 26 11:40:09 compute-0 podman[101297]: 2025-11-26 11:40:09.820592918 +0000 UTC m=+0.026842656 container create 1a265e34a4e93ebf3b2214952ede5631b9708d534cd3f57d30bc66434e0c65b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:40:09 compute-0 systemd[1]: Started libpod-conmon-1a265e34a4e93ebf3b2214952ede5631b9708d534cd3f57d30bc66434e0c65b5.scope.
Nov 26 11:40:09 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:09 compute-0 podman[101297]: 2025-11-26 11:40:09.865423736 +0000 UTC m=+0.071673494 container init 1a265e34a4e93ebf3b2214952ede5631b9708d534cd3f57d30bc66434e0c65b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mclean, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 11:40:09 compute-0 podman[101297]: 2025-11-26 11:40:09.870263987 +0000 UTC m=+0.076513734 container start 1a265e34a4e93ebf3b2214952ede5631b9708d534cd3f57d30bc66434e0c65b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 11:40:09 compute-0 podman[101297]: 2025-11-26 11:40:09.871323685 +0000 UTC m=+0.077573434 container attach 1a265e34a4e93ebf3b2214952ede5631b9708d534cd3f57d30bc66434e0c65b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mclean, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 11:40:09 compute-0 awesome_mclean[101311]: 167 167
Nov 26 11:40:09 compute-0 systemd[1]: libpod-1a265e34a4e93ebf3b2214952ede5631b9708d534cd3f57d30bc66434e0c65b5.scope: Deactivated successfully.
Nov 26 11:40:09 compute-0 podman[101297]: 2025-11-26 11:40:09.872836389 +0000 UTC m=+0.079086137 container died 1a265e34a4e93ebf3b2214952ede5631b9708d534cd3f57d30bc66434e0c65b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mclean, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:40:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e55f563ea82281c30eb81d3b43225b58ce4c0f7c6f9c0295de168389d0f561ac-merged.mount: Deactivated successfully.
Nov 26 11:40:09 compute-0 podman[101297]: 2025-11-26 11:40:09.892244159 +0000 UTC m=+0.098493907 container remove 1a265e34a4e93ebf3b2214952ede5631b9708d534cd3f57d30bc66434e0c65b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:40:09 compute-0 podman[101297]: 2025-11-26 11:40:09.80918037 +0000 UTC m=+0.015430138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:40:09 compute-0 systemd[1]: libpod-conmon-1a265e34a4e93ebf3b2214952ede5631b9708d534cd3f57d30bc66434e0c65b5.scope: Deactivated successfully.
Nov 26 11:40:10 compute-0 podman[101334]: 2025-11-26 11:40:10.003557568 +0000 UTC m=+0.027695867 container create 7e2508d3430d85b670054d7148e400e1fcb04d9dfe7ee196d1b8ed6b6da31a63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_newton, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:40:10 compute-0 systemd[1]: Started libpod-conmon-7e2508d3430d85b670054d7148e400e1fcb04d9dfe7ee196d1b8ed6b6da31a63.scope.
Nov 26 11:40:10 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e01a06e94cf3ff2b3da7fa3ba75026c2a0ebeb0d6bb6137c0c32f07378c576a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e01a06e94cf3ff2b3da7fa3ba75026c2a0ebeb0d6bb6137c0c32f07378c576a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e01a06e94cf3ff2b3da7fa3ba75026c2a0ebeb0d6bb6137c0c32f07378c576a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e01a06e94cf3ff2b3da7fa3ba75026c2a0ebeb0d6bb6137c0c32f07378c576a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:10 compute-0 podman[101334]: 2025-11-26 11:40:10.060631861 +0000 UTC m=+0.084770189 container init 7e2508d3430d85b670054d7148e400e1fcb04d9dfe7ee196d1b8ed6b6da31a63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_newton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:40:10 compute-0 podman[101334]: 2025-11-26 11:40:10.066111427 +0000 UTC m=+0.090249726 container start 7e2508d3430d85b670054d7148e400e1fcb04d9dfe7ee196d1b8ed6b6da31a63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_newton, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 26 11:40:10 compute-0 podman[101334]: 2025-11-26 11:40:10.068036379 +0000 UTC m=+0.092174677 container attach 7e2508d3430d85b670054d7148e400e1fcb04d9dfe7ee196d1b8ed6b6da31a63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_newton, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 11:40:10 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 26 11:40:10 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2607986314' entity='client.rgw.rgw.compute-0.oyquem' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 26 11:40:10 compute-0 podman[101334]: 2025-11-26 11:40:09.991770332 +0000 UTC m=+0.015908651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:40:10 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Nov 26 11:40:10 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Nov 26 11:40:10 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 37 pg[10.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [2] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:10 compute-0 sudo[101389]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwcqrdtdddojnbjyfzjxlspdlygejajv ; /usr/bin/python3'
Nov 26 11:40:10 compute-0 sudo[101389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:40:10 compute-0 python3[101391]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:40:10 compute-0 podman[101392]: 2025-11-26 11:40:10.330495074 +0000 UTC m=+0.026648971 container create 599144c6ae30ef2d1604bb75495ca14d0e8d3eb3e359348a23051da845e6910d (image=quay.io/ceph/ceph:v18, name=mystifying_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 11:40:10 compute-0 systemd[1]: Started libpod-conmon-599144c6ae30ef2d1604bb75495ca14d0e8d3eb3e359348a23051da845e6910d.scope.
Nov 26 11:40:10 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b502efaf378cfff46712bc10e42194753d5cf99f3d4742e1e7bb364ce74bfdc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b502efaf378cfff46712bc10e42194753d5cf99f3d4742e1e7bb364ce74bfdc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:10 compute-0 podman[101392]: 2025-11-26 11:40:10.385203174 +0000 UTC m=+0.081357091 container init 599144c6ae30ef2d1604bb75495ca14d0e8d3eb3e359348a23051da845e6910d (image=quay.io/ceph/ceph:v18, name=mystifying_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 11:40:10 compute-0 podman[101392]: 2025-11-26 11:40:10.389921514 +0000 UTC m=+0.086075411 container start 599144c6ae30ef2d1604bb75495ca14d0e8d3eb3e359348a23051da845e6910d (image=quay.io/ceph/ceph:v18, name=mystifying_kalam, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:40:10 compute-0 podman[101392]: 2025-11-26 11:40:10.392718731 +0000 UTC m=+0.088872648 container attach 599144c6ae30ef2d1604bb75495ca14d0e8d3eb3e359348a23051da845e6910d (image=quay.io/ceph/ceph:v18, name=mystifying_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 11:40:10 compute-0 ceph-mon[74928]: pgmap v71: 134 pgs: 1 unknown, 133 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s
Nov 26 11:40:10 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1252557621' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 26 11:40:10 compute-0 ceph-mon[74928]: 5.6 deep-scrub starts
Nov 26 11:40:10 compute-0 ceph-mon[74928]: 5.6 deep-scrub ok
Nov 26 11:40:10 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2607986314' entity='client.rgw.rgw.compute-0.oyquem' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 26 11:40:10 compute-0 ceph-mon[74928]: osdmap e37: 3 total, 3 up, 3 in
Nov 26 11:40:10 compute-0 podman[101392]: 2025-11-26 11:40:10.319390537 +0000 UTC m=+0.015544454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:40:10 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Nov 26 11:40:10 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Nov 26 11:40:10 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 26 11:40:10 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3480678431' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 11:40:10 compute-0 mystifying_kalam[101406]: 
Nov 26 11:40:10 compute-0 mystifying_kalam[101406]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.oyquem","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Nov 26 11:40:10 compute-0 systemd[1]: libpod-599144c6ae30ef2d1604bb75495ca14d0e8d3eb3e359348a23051da845e6910d.scope: Deactivated successfully.
Nov 26 11:40:10 compute-0 podman[101392]: 2025-11-26 11:40:10.824895527 +0000 UTC m=+0.521049424 container died 599144c6ae30ef2d1604bb75495ca14d0e8d3eb3e359348a23051da845e6910d (image=quay.io/ceph/ceph:v18, name=mystifying_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:40:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b502efaf378cfff46712bc10e42194753d5cf99f3d4742e1e7bb364ce74bfdc-merged.mount: Deactivated successfully.
Nov 26 11:40:10 compute-0 hungry_newton[101348]: {
Nov 26 11:40:10 compute-0 hungry_newton[101348]:     "2627095b-eef8-4027-bfef-68bf7cb6801f": {
Nov 26 11:40:10 compute-0 hungry_newton[101348]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:40:10 compute-0 hungry_newton[101348]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 11:40:10 compute-0 hungry_newton[101348]:         "osd_id": 1,
Nov 26 11:40:10 compute-0 hungry_newton[101348]:         "osd_uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:40:10 compute-0 hungry_newton[101348]:         "type": "bluestore"
Nov 26 11:40:10 compute-0 hungry_newton[101348]:     },
Nov 26 11:40:10 compute-0 hungry_newton[101348]:     "a9ad59a0-aa2e-4d92-b571-519d2d145b6a": {
Nov 26 11:40:10 compute-0 hungry_newton[101348]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:40:10 compute-0 hungry_newton[101348]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 11:40:10 compute-0 hungry_newton[101348]:         "osd_id": 0,
Nov 26 11:40:10 compute-0 hungry_newton[101348]:         "osd_uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:40:10 compute-0 hungry_newton[101348]:         "type": "bluestore"
Nov 26 11:40:10 compute-0 hungry_newton[101348]:     },
Nov 26 11:40:10 compute-0 hungry_newton[101348]:     "d56156fb-7361-4bef-b06b-1320109b4323": {
Nov 26 11:40:10 compute-0 hungry_newton[101348]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:40:10 compute-0 hungry_newton[101348]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 11:40:10 compute-0 hungry_newton[101348]:         "osd_id": 2,
Nov 26 11:40:10 compute-0 hungry_newton[101348]:         "osd_uuid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:40:10 compute-0 hungry_newton[101348]:         "type": "bluestore"
Nov 26 11:40:10 compute-0 hungry_newton[101348]:     }
Nov 26 11:40:10 compute-0 hungry_newton[101348]: }
Nov 26 11:40:10 compute-0 podman[101392]: 2025-11-26 11:40:10.848416548 +0000 UTC m=+0.544570444 container remove 599144c6ae30ef2d1604bb75495ca14d0e8d3eb3e359348a23051da845e6910d (image=quay.io/ceph/ceph:v18, name=mystifying_kalam, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:40:10 compute-0 systemd[1]: libpod-conmon-599144c6ae30ef2d1604bb75495ca14d0e8d3eb3e359348a23051da845e6910d.scope: Deactivated successfully.
Nov 26 11:40:10 compute-0 sudo[101389]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:10 compute-0 systemd[1]: libpod-7e2508d3430d85b670054d7148e400e1fcb04d9dfe7ee196d1b8ed6b6da31a63.scope: Deactivated successfully.
Nov 26 11:40:10 compute-0 podman[101334]: 2025-11-26 11:40:10.867397221 +0000 UTC m=+0.891535521 container died 7e2508d3430d85b670054d7148e400e1fcb04d9dfe7ee196d1b8ed6b6da31a63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_newton, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 11:40:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e01a06e94cf3ff2b3da7fa3ba75026c2a0ebeb0d6bb6137c0c32f07378c576a-merged.mount: Deactivated successfully.
Nov 26 11:40:10 compute-0 podman[101334]: 2025-11-26 11:40:10.895956535 +0000 UTC m=+0.920094835 container remove 7e2508d3430d85b670054d7148e400e1fcb04d9dfe7ee196d1b8ed6b6da31a63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_newton, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:40:10 compute-0 systemd[1]: libpod-conmon-7e2508d3430d85b670054d7148e400e1fcb04d9dfe7ee196d1b8ed6b6da31a63.scope: Deactivated successfully.
Nov 26 11:40:10 compute-0 sudo[101241]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:10 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:40:10 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:10 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:40:10 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:10 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 98d3a1ed-5027-4f20-b309-53678abf5381 does not exist
Nov 26 11:40:10 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 5e519a38-6f7b-44c9-a7a3-904df0946b57 does not exist
Nov 26 11:40:10 compute-0 sudo[101478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:10 compute-0 sudo[101478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:10 compute-0 sudo[101478]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:11 compute-0 sudo[101503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:40:11 compute-0 sudo[101503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:11 compute-0 sudo[101503]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:11 compute-0 sudo[101528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:11 compute-0 sudo[101528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:11 compute-0 sudo[101528]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 26 11:40:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Nov 26 11:40:11 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Nov 26 11:40:11 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 38 pg[11.0( empty local-lis/les=0/0 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 26 11:40:11 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2049731117' entity='client.rgw.rgw.compute-0.oyquem' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 26 11:40:11 compute-0 sudo[101553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:40:11 compute-0 sudo[101553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:11 compute-0 sudo[101553]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:11 compute-0 sudo[101578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:11 compute-0 sudo[101578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:11 compute-0 sudo[101578]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:11 compute-0 sudo[101603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 26 11:40:11 compute-0 sudo[101603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:11 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v74: 135 pgs: 2 unknown, 133 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 13 op/s
Nov 26 11:40:11 compute-0 ceph-mds[100145]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Nov 26 11:40:11 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mds-cephfs-compute-0-hvqwax[100141]: 2025-11-26T11:40:11.404+0000 7f7f5309a640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Nov 26 11:40:11 compute-0 ceph-mon[74928]: 4.16 scrub starts
Nov 26 11:40:11 compute-0 ceph-mon[74928]: 4.16 scrub ok
Nov 26 11:40:11 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3480678431' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 11:40:11 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:11 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:11 compute-0 ceph-mon[74928]: osdmap e38: 3 total, 3 up, 3 in
Nov 26 11:40:11 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2049731117' entity='client.rgw.rgw.compute-0.oyquem' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 26 11:40:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:40:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:40:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:40:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:40:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:40:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:40:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:40:11 compute-0 podman[101683]: 2025-11-26 11:40:11.542830632 +0000 UTC m=+0.036518798 container exec 810eaed6cbde00330c0646cedaef5c2ae94579236d67ba42b8ef02d055d04ad5 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 11:40:11 compute-0 sudo[101723]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwiwuyqqheddgmczumpurcgbkqaimyfh ; /usr/bin/python3'
Nov 26 11:40:11 compute-0 sudo[101723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:40:11 compute-0 podman[101683]: 2025-11-26 11:40:11.620199679 +0000 UTC m=+0.113887856 container exec_died 810eaed6cbde00330c0646cedaef5c2ae94579236d67ba42b8ef02d055d04ad5 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:40:11 compute-0 python3[101725]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:40:11 compute-0 podman[101754]: 2025-11-26 11:40:11.748291878 +0000 UTC m=+0.031833720 container create 6b1b8f9dc9508e53112bf96e539641b99dd919cfd217fdf93d3f463226e36db3 (image=quay.io/ceph/ceph:v18, name=interesting_pare, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 11:40:11 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.8 deep-scrub starts
Nov 26 11:40:11 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.8 deep-scrub ok
Nov 26 11:40:11 compute-0 systemd[1]: Started libpod-conmon-6b1b8f9dc9508e53112bf96e539641b99dd919cfd217fdf93d3f463226e36db3.scope.
Nov 26 11:40:11 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64f8887e1fadb347071bce63b5411d2469860b1d183b67a54841a62b14e6d2d0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64f8887e1fadb347071bce63b5411d2469860b1d183b67a54841a62b14e6d2d0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:11 compute-0 podman[101754]: 2025-11-26 11:40:11.803171522 +0000 UTC m=+0.086713374 container init 6b1b8f9dc9508e53112bf96e539641b99dd919cfd217fdf93d3f463226e36db3 (image=quay.io/ceph/ceph:v18, name=interesting_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 11:40:11 compute-0 podman[101754]: 2025-11-26 11:40:11.808728465 +0000 UTC m=+0.092270297 container start 6b1b8f9dc9508e53112bf96e539641b99dd919cfd217fdf93d3f463226e36db3 (image=quay.io/ceph/ceph:v18, name=interesting_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:40:11 compute-0 podman[101754]: 2025-11-26 11:40:11.810137032 +0000 UTC m=+0.093678864 container attach 6b1b8f9dc9508e53112bf96e539641b99dd919cfd217fdf93d3f463226e36db3 (image=quay.io/ceph/ceph:v18, name=interesting_pare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 11:40:11 compute-0 podman[101754]: 2025-11-26 11:40:11.734523376 +0000 UTC m=+0.018065228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:40:12 compute-0 sudo[101603]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:40:12 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:40:12 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:40:12 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:40:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:40:12 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:40:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:40:12 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:12 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 6ba39168-aaa5-4229-8209-795dcdd127cb does not exist
Nov 26 11:40:12 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 47b7269d-0cd4-4ba1-897c-1aaac399fe91 does not exist
Nov 26 11:40:12 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 8ed157cd-2feb-44a3-a6bc-cd81cf517649 does not exist
Nov 26 11:40:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 11:40:12 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:40:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 11:40:12 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:40:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:40:12 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:40:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 26 11:40:12 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2049731117' entity='client.rgw.rgw.compute-0.oyquem' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 26 11:40:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Nov 26 11:40:12 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Nov 26 11:40:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 26 11:40:12 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2049731117' entity='client.rgw.rgw.compute-0.oyquem' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 26 11:40:12 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 39 pg[11.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:12 compute-0 sudo[101872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:12 compute-0 sudo[101872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:12 compute-0 sudo[101872]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:12 compute-0 sudo[101897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:40:12 compute-0 sudo[101897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:12 compute-0 sudo[101897]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:12 compute-0 sudo[101922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:12 compute-0 sudo[101922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:12 compute-0 sudo[101922]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Nov 26 11:40:12 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2337429234' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 26 11:40:12 compute-0 interesting_pare[101781]: mimic
Nov 26 11:40:12 compute-0 sudo[101947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 11:40:12 compute-0 sudo[101947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:12 compute-0 systemd[1]: libpod-6b1b8f9dc9508e53112bf96e539641b99dd919cfd217fdf93d3f463226e36db3.scope: Deactivated successfully.
Nov 26 11:40:12 compute-0 podman[101974]: 2025-11-26 11:40:12.275970367 +0000 UTC m=+0.015648599 container died 6b1b8f9dc9508e53112bf96e539641b99dd919cfd217fdf93d3f463226e36db3 (image=quay.io/ceph/ceph:v18, name=interesting_pare, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:40:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-64f8887e1fadb347071bce63b5411d2469860b1d183b67a54841a62b14e6d2d0-merged.mount: Deactivated successfully.
Nov 26 11:40:12 compute-0 podman[101974]: 2025-11-26 11:40:12.296311819 +0000 UTC m=+0.035990030 container remove 6b1b8f9dc9508e53112bf96e539641b99dd919cfd217fdf93d3f463226e36db3 (image=quay.io/ceph/ceph:v18, name=interesting_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:40:12 compute-0 systemd[1]: libpod-conmon-6b1b8f9dc9508e53112bf96e539641b99dd919cfd217fdf93d3f463226e36db3.scope: Deactivated successfully.
Nov 26 11:40:12 compute-0 sudo[101723]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:12 compute-0 ceph-mon[74928]: pgmap v74: 135 pgs: 2 unknown, 133 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 13 op/s
Nov 26 11:40:12 compute-0 ceph-mon[74928]: 5.8 deep-scrub starts
Nov 26 11:40:12 compute-0 ceph-mon[74928]: 5.8 deep-scrub ok
Nov 26 11:40:12 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:12 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:12 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:40:12 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:40:12 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:12 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:40:12 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:40:12 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:40:12 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2049731117' entity='client.rgw.rgw.compute-0.oyquem' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 26 11:40:12 compute-0 ceph-mon[74928]: osdmap e39: 3 total, 3 up, 3 in
Nov 26 11:40:12 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2049731117' entity='client.rgw.rgw.compute-0.oyquem' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 26 11:40:12 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2337429234' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 26 11:40:12 compute-0 podman[102016]: 2025-11-26 11:40:12.475070735 +0000 UTC m=+0.026816807 container create cdb55815042dd72aad6debc6ed12cd08e409f807febc776efa3ea0fe4359aea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 11:40:12 compute-0 systemd[1]: Started libpod-conmon-cdb55815042dd72aad6debc6ed12cd08e409f807febc776efa3ea0fe4359aea9.scope.
Nov 26 11:40:12 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:12 compute-0 podman[102016]: 2025-11-26 11:40:12.529022538 +0000 UTC m=+0.080768610 container init cdb55815042dd72aad6debc6ed12cd08e409f807febc776efa3ea0fe4359aea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:40:12 compute-0 podman[102016]: 2025-11-26 11:40:12.534048579 +0000 UTC m=+0.085794642 container start cdb55815042dd72aad6debc6ed12cd08e409f807febc776efa3ea0fe4359aea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_moser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:40:12 compute-0 gracious_moser[102029]: 167 167
Nov 26 11:40:12 compute-0 podman[102016]: 2025-11-26 11:40:12.536215436 +0000 UTC m=+0.087961519 container attach cdb55815042dd72aad6debc6ed12cd08e409f807febc776efa3ea0fe4359aea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 11:40:12 compute-0 systemd[1]: libpod-cdb55815042dd72aad6debc6ed12cd08e409f807febc776efa3ea0fe4359aea9.scope: Deactivated successfully.
Nov 26 11:40:12 compute-0 podman[102016]: 2025-11-26 11:40:12.537070079 +0000 UTC m=+0.088816141 container died cdb55815042dd72aad6debc6ed12cd08e409f807febc776efa3ea0fe4359aea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:40:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-eaab4044ffbf29141ea88c5787c15c4720e60a9823634fda605d5871f9026af3-merged.mount: Deactivated successfully.
Nov 26 11:40:12 compute-0 podman[102016]: 2025-11-26 11:40:12.556627301 +0000 UTC m=+0.108373363 container remove cdb55815042dd72aad6debc6ed12cd08e409f807febc776efa3ea0fe4359aea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:40:12 compute-0 podman[102016]: 2025-11-26 11:40:12.46408354 +0000 UTC m=+0.015829601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:40:12 compute-0 systemd[1]: libpod-conmon-cdb55815042dd72aad6debc6ed12cd08e409f807febc776efa3ea0fe4359aea9.scope: Deactivated successfully.
Nov 26 11:40:12 compute-0 podman[102051]: 2025-11-26 11:40:12.667721125 +0000 UTC m=+0.028293483 container create 101dd9e1a4aef7dedce8a715308e6a051cfc533723aec757016e24c1397eceb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_zhukovsky, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 11:40:12 compute-0 systemd[1]: Started libpod-conmon-101dd9e1a4aef7dedce8a715308e6a051cfc533723aec757016e24c1397eceb4.scope.
Nov 26 11:40:12 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08feaae4fb0ea154e84f9b5d792ffa98950b01909eaad33638bf1a640d81ee5d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08feaae4fb0ea154e84f9b5d792ffa98950b01909eaad33638bf1a640d81ee5d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08feaae4fb0ea154e84f9b5d792ffa98950b01909eaad33638bf1a640d81ee5d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08feaae4fb0ea154e84f9b5d792ffa98950b01909eaad33638bf1a640d81ee5d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08feaae4fb0ea154e84f9b5d792ffa98950b01909eaad33638bf1a640d81ee5d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:12 compute-0 podman[102051]: 2025-11-26 11:40:12.729012123 +0000 UTC m=+0.089584481 container init 101dd9e1a4aef7dedce8a715308e6a051cfc533723aec757016e24c1397eceb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_zhukovsky, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:40:12 compute-0 podman[102051]: 2025-11-26 11:40:12.734018467 +0000 UTC m=+0.094590824 container start 101dd9e1a4aef7dedce8a715308e6a051cfc533723aec757016e24c1397eceb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 11:40:12 compute-0 podman[102051]: 2025-11-26 11:40:12.735315665 +0000 UTC m=+0.095888021 container attach 101dd9e1a4aef7dedce8a715308e6a051cfc533723aec757016e24c1397eceb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_zhukovsky, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:40:12 compute-0 podman[102051]: 2025-11-26 11:40:12.656226042 +0000 UTC m=+0.016798419 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:40:13 compute-0 sudo[102092]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bexvqsjiluxkpxmnihgbghaacugwwtdy ; /usr/bin/python3'
Nov 26 11:40:13 compute-0 sudo[102092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:40:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 26 11:40:13 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2049731117' entity='client.rgw.rgw.compute-0.oyquem' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 26 11:40:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Nov 26 11:40:13 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Nov 26 11:40:13 compute-0 python3[102094]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:40:13 compute-0 radosgw[99725]: LDAP not started since no server URIs were provided in the configuration.
Nov 26 11:40:13 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-rgw-rgw-compute-0-oyquem[99698]: 2025-11-26T11:40:13.148+0000 7fdbf5769940 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 26 11:40:13 compute-0 radosgw[99725]: framework: beast
Nov 26 11:40:13 compute-0 radosgw[99725]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 26 11:40:13 compute-0 radosgw[99725]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 26 11:40:13 compute-0 podman[102095]: 2025-11-26 11:40:13.177544693 +0000 UTC m=+0.032047474 container create 157fc5b2d771f08161bad94f4c420b70939bd5eac5a46c2dc70860ab0db47fc6 (image=quay.io/ceph/ceph:v18, name=blissful_allen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 11:40:13 compute-0 radosgw[99725]: starting handler: beast
Nov 26 11:40:13 compute-0 radosgw[99725]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 11:40:13 compute-0 systemd[1]: Started libpod-conmon-157fc5b2d771f08161bad94f4c420b70939bd5eac5a46c2dc70860ab0db47fc6.scope.
Nov 26 11:40:13 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70483f29e3484e452f21cdabc1b1816e69a91f8f4c5b89e0450d2088c3d51a9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70483f29e3484e452f21cdabc1b1816e69a91f8f4c5b89e0450d2088c3d51a9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:13 compute-0 radosgw[99725]: mgrc service_daemon_register rgw.14273 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC 7763 64-Core Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.oyquem,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865360,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=431f18d9-a144-45cb-98de-3bf65d0bac42,zone_name=default,zonegroup_id=f774fc41-a0f8-4138-8bd0-1d9ce9afd02c,zonegroup_name=default}
Nov 26 11:40:13 compute-0 podman[102095]: 2025-11-26 11:40:13.235821435 +0000 UTC m=+0.090324226 container init 157fc5b2d771f08161bad94f4c420b70939bd5eac5a46c2dc70860ab0db47fc6 (image=quay.io/ceph/ceph:v18, name=blissful_allen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 11:40:13 compute-0 podman[102095]: 2025-11-26 11:40:13.240878484 +0000 UTC m=+0.095381256 container start 157fc5b2d771f08161bad94f4c420b70939bd5eac5a46c2dc70860ab0db47fc6 (image=quay.io/ceph/ceph:v18, name=blissful_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:40:13 compute-0 podman[102095]: 2025-11-26 11:40:13.243661796 +0000 UTC m=+0.098164566 container attach 157fc5b2d771f08161bad94f4c420b70939bd5eac5a46c2dc70860ab0db47fc6 (image=quay.io/ceph/ceph:v18, name=blissful_allen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 11:40:13 compute-0 podman[102095]: 2025-11-26 11:40:13.164961326 +0000 UTC m=+0.019464118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:40:13 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v77: 135 pgs: 1 unknown, 134 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 11 op/s
Nov 26 11:40:13 compute-0 strange_zhukovsky[102064]: --> passed data devices: 0 physical, 3 LVM
Nov 26 11:40:13 compute-0 strange_zhukovsky[102064]: --> relative data size: 1.0
Nov 26 11:40:13 compute-0 strange_zhukovsky[102064]: --> All data devices are unavailable
Nov 26 11:40:13 compute-0 systemd[1]: libpod-101dd9e1a4aef7dedce8a715308e6a051cfc533723aec757016e24c1397eceb4.scope: Deactivated successfully.
Nov 26 11:40:13 compute-0 podman[102051]: 2025-11-26 11:40:13.604233929 +0000 UTC m=+0.964806286 container died 101dd9e1a4aef7dedce8a715308e6a051cfc533723aec757016e24c1397eceb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_zhukovsky, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:40:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-08feaae4fb0ea154e84f9b5d792ffa98950b01909eaad33638bf1a640d81ee5d-merged.mount: Deactivated successfully.
Nov 26 11:40:13 compute-0 podman[102051]: 2025-11-26 11:40:13.638862231 +0000 UTC m=+0.999434588 container remove 101dd9e1a4aef7dedce8a715308e6a051cfc533723aec757016e24c1397eceb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 11:40:13 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 26 11:40:13 compute-0 systemd[1]: libpod-conmon-101dd9e1a4aef7dedce8a715308e6a051cfc533723aec757016e24c1397eceb4.scope: Deactivated successfully.
Nov 26 11:40:13 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 26 11:40:13 compute-0 sudo[101947]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:13 compute-0 sudo[102708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:13 compute-0 sudo[102708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:13 compute-0 sudo[102708]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:13 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.a deep-scrub starts
Nov 26 11:40:13 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.a deep-scrub ok
Nov 26 11:40:13 compute-0 sudo[102733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:40:13 compute-0 sudo[102733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:13 compute-0 sudo[102733]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Nov 26 11:40:13 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/833389256' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 26 11:40:13 compute-0 blissful_allen[102174]: 
Nov 26 11:40:13 compute-0 systemd[1]: libpod-157fc5b2d771f08161bad94f4c420b70939bd5eac5a46c2dc70860ab0db47fc6.scope: Deactivated successfully.
Nov 26 11:40:13 compute-0 conmon[102174]: conmon 157fc5b2d771f08161ba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-157fc5b2d771f08161bad94f4c420b70939bd5eac5a46c2dc70860ab0db47fc6.scope/container/memory.events
Nov 26 11:40:13 compute-0 blissful_allen[102174]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":6}}
Nov 26 11:40:13 compute-0 podman[102095]: 2025-11-26 11:40:13.782801043 +0000 UTC m=+0.637303824 container died 157fc5b2d771f08161bad94f4c420b70939bd5eac5a46c2dc70860ab0db47fc6 (image=quay.io/ceph/ceph:v18, name=blissful_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:40:13 compute-0 sudo[102758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:13 compute-0 sudo[102758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:13 compute-0 sudo[102758]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-b70483f29e3484e452f21cdabc1b1816e69a91f8f4c5b89e0450d2088c3d51a9-merged.mount: Deactivated successfully.
Nov 26 11:40:13 compute-0 podman[102095]: 2025-11-26 11:40:13.809953273 +0000 UTC m=+0.664456043 container remove 157fc5b2d771f08161bad94f4c420b70939bd5eac5a46c2dc70860ab0db47fc6 (image=quay.io/ceph/ceph:v18, name=blissful_allen, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 11:40:13 compute-0 systemd[1]: libpod-conmon-157fc5b2d771f08161bad94f4c420b70939bd5eac5a46c2dc70860ab0db47fc6.scope: Deactivated successfully.
Nov 26 11:40:13 compute-0 sudo[102092]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:13 compute-0 sudo[102792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- lvm list --format json
Nov 26 11:40:13 compute-0 sudo[102792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:14 compute-0 podman[102850]: 2025-11-26 11:40:14.07078096 +0000 UTC m=+0.029062443 container create 42baf2ac64cb4265e082895f42b7084e0f7045d2cd530c4a545989888788c792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shamir, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:40:14 compute-0 systemd[1]: Started libpod-conmon-42baf2ac64cb4265e082895f42b7084e0f7045d2cd530c4a545989888788c792.scope.
Nov 26 11:40:14 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 26 11:40:14 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 26 11:40:14 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2049731117' entity='client.rgw.rgw.compute-0.oyquem' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 26 11:40:14 compute-0 ceph-mon[74928]: osdmap e40: 3 total, 3 up, 3 in
Nov 26 11:40:14 compute-0 ceph-mon[74928]: pgmap v77: 135 pgs: 1 unknown, 134 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 11 op/s
Nov 26 11:40:14 compute-0 ceph-mon[74928]: 4.17 scrub starts
Nov 26 11:40:14 compute-0 ceph-mon[74928]: 4.17 scrub ok
Nov 26 11:40:14 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/833389256' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 26 11:40:14 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:14 compute-0 podman[102850]: 2025-11-26 11:40:14.121842953 +0000 UTC m=+0.080124426 container init 42baf2ac64cb4265e082895f42b7084e0f7045d2cd530c4a545989888788c792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shamir, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:40:14 compute-0 podman[102850]: 2025-11-26 11:40:14.126412482 +0000 UTC m=+0.084693955 container start 42baf2ac64cb4265e082895f42b7084e0f7045d2cd530c4a545989888788c792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 11:40:14 compute-0 podman[102850]: 2025-11-26 11:40:14.127520743 +0000 UTC m=+0.085802216 container attach 42baf2ac64cb4265e082895f42b7084e0f7045d2cd530c4a545989888788c792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shamir, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:40:14 compute-0 systemd[1]: libpod-42baf2ac64cb4265e082895f42b7084e0f7045d2cd530c4a545989888788c792.scope: Deactivated successfully.
Nov 26 11:40:14 compute-0 elegant_shamir[102863]: 167 167
Nov 26 11:40:14 compute-0 conmon[102863]: conmon 42baf2ac64cb4265e082 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42baf2ac64cb4265e082895f42b7084e0f7045d2cd530c4a545989888788c792.scope/container/memory.events
Nov 26 11:40:14 compute-0 podman[102850]: 2025-11-26 11:40:14.130761126 +0000 UTC m=+0.089042600 container died 42baf2ac64cb4265e082895f42b7084e0f7045d2cd530c4a545989888788c792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:40:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-93529cfa69e05ab25b128992869667943a8ff40bbeef7a74e316200bd3081f62-merged.mount: Deactivated successfully.
Nov 26 11:40:14 compute-0 podman[102850]: 2025-11-26 11:40:14.150832867 +0000 UTC m=+0.109114340 container remove 42baf2ac64cb4265e082895f42b7084e0f7045d2cd530c4a545989888788c792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:40:14 compute-0 podman[102850]: 2025-11-26 11:40:14.057719142 +0000 UTC m=+0.016000615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:40:14 compute-0 systemd[1]: libpod-conmon-42baf2ac64cb4265e082895f42b7084e0f7045d2cd530c4a545989888788c792.scope: Deactivated successfully.
Nov 26 11:40:14 compute-0 podman[102884]: 2025-11-26 11:40:14.261402957 +0000 UTC m=+0.028961761 container create e87d95a2a6250b78d104e933181508666eb7a6a0000a6e147f26bf2a464fe2c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:40:14 compute-0 systemd[1]: Started libpod-conmon-e87d95a2a6250b78d104e933181508666eb7a6a0000a6e147f26bf2a464fe2c4.scope.
Nov 26 11:40:14 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 3.19 deep-scrub starts
Nov 26 11:40:14 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f35abf52ee2f2f7cbc37de39653e638c1505f9fec91e4b4d571ff39ce42bc8a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f35abf52ee2f2f7cbc37de39653e638c1505f9fec91e4b4d571ff39ce42bc8a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f35abf52ee2f2f7cbc37de39653e638c1505f9fec91e4b4d571ff39ce42bc8a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f35abf52ee2f2f7cbc37de39653e638c1505f9fec91e4b4d571ff39ce42bc8a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:14 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 3.19 deep-scrub ok
Nov 26 11:40:14 compute-0 podman[102884]: 2025-11-26 11:40:14.321877807 +0000 UTC m=+0.089436620 container init e87d95a2a6250b78d104e933181508666eb7a6a0000a6e147f26bf2a464fe2c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 11:40:14 compute-0 podman[102884]: 2025-11-26 11:40:14.327150748 +0000 UTC m=+0.094709551 container start e87d95a2a6250b78d104e933181508666eb7a6a0000a6e147f26bf2a464fe2c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wu, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 11:40:14 compute-0 podman[102884]: 2025-11-26 11:40:14.328382836 +0000 UTC m=+0.095941639 container attach e87d95a2a6250b78d104e933181508666eb7a6a0000a6e147f26bf2a464fe2c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wu, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 11:40:14 compute-0 podman[102884]: 2025-11-26 11:40:14.250415081 +0000 UTC m=+0.017973894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:40:14 compute-0 jolly_wu[102898]: {
Nov 26 11:40:14 compute-0 jolly_wu[102898]:     "0": [
Nov 26 11:40:14 compute-0 jolly_wu[102898]:         {
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "devices": [
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "/dev/loop3"
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             ],
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "lv_name": "ceph_lv0",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "lv_size": "21470642176",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a9ad59a0-aa2e-4d92-b571-519d2d145b6a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "lv_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "name": "ceph_lv0",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "tags": {
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.block_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.cluster_name": "ceph",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.crush_device_class": "",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.encrypted": "0",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.osd_fsid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.osd_id": "0",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.type": "block",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.vdo": "0"
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             },
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "type": "block",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "vg_name": "ceph_vg0"
Nov 26 11:40:14 compute-0 jolly_wu[102898]:         }
Nov 26 11:40:14 compute-0 jolly_wu[102898]:     ],
Nov 26 11:40:14 compute-0 jolly_wu[102898]:     "1": [
Nov 26 11:40:14 compute-0 jolly_wu[102898]:         {
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "devices": [
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "/dev/loop4"
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             ],
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "lv_name": "ceph_lv1",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "lv_size": "21470642176",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2627095b-eef8-4027-bfef-68bf7cb6801f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "lv_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "name": "ceph_lv1",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "tags": {
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.block_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.cluster_name": "ceph",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.crush_device_class": "",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.encrypted": "0",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.osd_fsid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.osd_id": "1",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.type": "block",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.vdo": "0"
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             },
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "type": "block",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "vg_name": "ceph_vg1"
Nov 26 11:40:14 compute-0 jolly_wu[102898]:         }
Nov 26 11:40:14 compute-0 jolly_wu[102898]:     ],
Nov 26 11:40:14 compute-0 jolly_wu[102898]:     "2": [
Nov 26 11:40:14 compute-0 jolly_wu[102898]:         {
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "devices": [
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "/dev/loop5"
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             ],
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "lv_name": "ceph_lv2",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "lv_size": "21470642176",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d56156fb-7361-4bef-b06b-1320109b4323,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "lv_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "name": "ceph_lv2",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "tags": {
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.block_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.cluster_name": "ceph",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.crush_device_class": "",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.encrypted": "0",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.osd_fsid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.osd_id": "2",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.type": "block",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:                 "ceph.vdo": "0"
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             },
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "type": "block",
Nov 26 11:40:14 compute-0 jolly_wu[102898]:             "vg_name": "ceph_vg2"
Nov 26 11:40:14 compute-0 jolly_wu[102898]:         }
Nov 26 11:40:14 compute-0 jolly_wu[102898]:     ]
Nov 26 11:40:14 compute-0 jolly_wu[102898]: }
Nov 26 11:40:14 compute-0 systemd[1]: libpod-e87d95a2a6250b78d104e933181508666eb7a6a0000a6e147f26bf2a464fe2c4.scope: Deactivated successfully.
Nov 26 11:40:14 compute-0 podman[102884]: 2025-11-26 11:40:14.956697476 +0000 UTC m=+0.724256310 container died e87d95a2a6250b78d104e933181508666eb7a6a0000a6e147f26bf2a464fe2c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 11:40:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-f35abf52ee2f2f7cbc37de39653e638c1505f9fec91e4b4d571ff39ce42bc8a9-merged.mount: Deactivated successfully.
Nov 26 11:40:14 compute-0 podman[102884]: 2025-11-26 11:40:14.986606074 +0000 UTC m=+0.754164877 container remove e87d95a2a6250b78d104e933181508666eb7a6a0000a6e147f26bf2a464fe2c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wu, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 11:40:14 compute-0 systemd[1]: libpod-conmon-e87d95a2a6250b78d104e933181508666eb7a6a0000a6e147f26bf2a464fe2c4.scope: Deactivated successfully.
Nov 26 11:40:15 compute-0 sudo[102792]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:15 compute-0 sudo[102916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:15 compute-0 sudo[102916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:15 compute-0 sudo[102916]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:15 compute-0 sudo[102941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:40:15 compute-0 sudo[102941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:15 compute-0 sudo[102941]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:15 compute-0 ceph-mon[74928]: 5.a deep-scrub starts
Nov 26 11:40:15 compute-0 ceph-mon[74928]: 5.a deep-scrub ok
Nov 26 11:40:15 compute-0 ceph-mon[74928]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 26 11:40:15 compute-0 ceph-mon[74928]: Cluster is now healthy
Nov 26 11:40:15 compute-0 ceph-mon[74928]: 3.19 deep-scrub starts
Nov 26 11:40:15 compute-0 ceph-mon[74928]: 3.19 deep-scrub ok
Nov 26 11:40:15 compute-0 sudo[102966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:15 compute-0 sudo[102966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:15 compute-0 sudo[102966]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:15 compute-0 sudo[102991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- raw list --format json
Nov 26 11:40:15 compute-0 sudo[102991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:15 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Nov 26 11:40:15 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Nov 26 11:40:15 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v78: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 9.1 KiB/s wr, 217 op/s
Nov 26 11:40:15 compute-0 podman[103047]: 2025-11-26 11:40:15.400810452 +0000 UTC m=+0.026553791 container create 8d0f5a4542e23bec5e169f686f5aa0e13922734575332a0c7db82a49aec12b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 11:40:15 compute-0 systemd[1]: Started libpod-conmon-8d0f5a4542e23bec5e169f686f5aa0e13922734575332a0c7db82a49aec12b05.scope.
Nov 26 11:40:15 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:15 compute-0 podman[103047]: 2025-11-26 11:40:15.454051257 +0000 UTC m=+0.079794615 container init 8d0f5a4542e23bec5e169f686f5aa0e13922734575332a0c7db82a49aec12b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_vaughan, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 26 11:40:15 compute-0 podman[103047]: 2025-11-26 11:40:15.458846645 +0000 UTC m=+0.084589984 container start 8d0f5a4542e23bec5e169f686f5aa0e13922734575332a0c7db82a49aec12b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 26 11:40:15 compute-0 podman[103047]: 2025-11-26 11:40:15.460114731 +0000 UTC m=+0.085858070 container attach 8d0f5a4542e23bec5e169f686f5aa0e13922734575332a0c7db82a49aec12b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_vaughan, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 11:40:15 compute-0 musing_vaughan[103061]: 167 167
Nov 26 11:40:15 compute-0 systemd[1]: libpod-8d0f5a4542e23bec5e169f686f5aa0e13922734575332a0c7db82a49aec12b05.scope: Deactivated successfully.
Nov 26 11:40:15 compute-0 podman[103047]: 2025-11-26 11:40:15.462320787 +0000 UTC m=+0.088064126 container died 8d0f5a4542e23bec5e169f686f5aa0e13922734575332a0c7db82a49aec12b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 11:40:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1c2a3bb76a96a3ad30b912d2b0e1c899d5a7f76a937a2c3636ac2780eae6025-merged.mount: Deactivated successfully.
Nov 26 11:40:15 compute-0 podman[103047]: 2025-11-26 11:40:15.481180534 +0000 UTC m=+0.106923874 container remove 8d0f5a4542e23bec5e169f686f5aa0e13922734575332a0c7db82a49aec12b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_vaughan, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 11:40:15 compute-0 podman[103047]: 2025-11-26 11:40:15.389719582 +0000 UTC m=+0.015462940 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:40:15 compute-0 systemd[1]: libpod-conmon-8d0f5a4542e23bec5e169f686f5aa0e13922734575332a0c7db82a49aec12b05.scope: Deactivated successfully.
Nov 26 11:40:15 compute-0 podman[103083]: 2025-11-26 11:40:15.594212556 +0000 UTC m=+0.027870751 container create 6cef8ccae9ba3cc084991631eae5f7ea8ab30039e3236553404894942e5d9e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lamarr, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:40:15 compute-0 systemd[1]: Started libpod-conmon-6cef8ccae9ba3cc084991631eae5f7ea8ab30039e3236553404894942e5d9e67.scope.
Nov 26 11:40:15 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d280731c15edbd57f0f27f54618276f0dedea8a1290398600256014171e4ed3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d280731c15edbd57f0f27f54618276f0dedea8a1290398600256014171e4ed3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d280731c15edbd57f0f27f54618276f0dedea8a1290398600256014171e4ed3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d280731c15edbd57f0f27f54618276f0dedea8a1290398600256014171e4ed3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:15 compute-0 podman[103083]: 2025-11-26 11:40:15.639659878 +0000 UTC m=+0.073318071 container init 6cef8ccae9ba3cc084991631eae5f7ea8ab30039e3236553404894942e5d9e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:40:15 compute-0 podman[103083]: 2025-11-26 11:40:15.644702985 +0000 UTC m=+0.078361179 container start 6cef8ccae9ba3cc084991631eae5f7ea8ab30039e3236553404894942e5d9e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 11:40:15 compute-0 podman[103083]: 2025-11-26 11:40:15.645724844 +0000 UTC m=+0.079383048 container attach 6cef8ccae9ba3cc084991631eae5f7ea8ab30039e3236553404894942e5d9e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 11:40:15 compute-0 podman[103083]: 2025-11-26 11:40:15.582770922 +0000 UTC m=+0.016429137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:40:16 compute-0 ceph-mon[74928]: 3.1a scrub starts
Nov 26 11:40:16 compute-0 ceph-mon[74928]: 3.1a scrub ok
Nov 26 11:40:16 compute-0 ceph-mon[74928]: pgmap v78: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 9.1 KiB/s wr, 217 op/s
Nov 26 11:40:16 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 3.1c deep-scrub starts
Nov 26 11:40:16 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 3.1c deep-scrub ok
Nov 26 11:40:16 compute-0 elegant_lamarr[103097]: {
Nov 26 11:40:16 compute-0 elegant_lamarr[103097]:     "2627095b-eef8-4027-bfef-68bf7cb6801f": {
Nov 26 11:40:16 compute-0 elegant_lamarr[103097]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:40:16 compute-0 elegant_lamarr[103097]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 11:40:16 compute-0 elegant_lamarr[103097]:         "osd_id": 1,
Nov 26 11:40:16 compute-0 elegant_lamarr[103097]:         "osd_uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:40:16 compute-0 elegant_lamarr[103097]:         "type": "bluestore"
Nov 26 11:40:16 compute-0 elegant_lamarr[103097]:     },
Nov 26 11:40:16 compute-0 elegant_lamarr[103097]:     "a9ad59a0-aa2e-4d92-b571-519d2d145b6a": {
Nov 26 11:40:16 compute-0 elegant_lamarr[103097]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:40:16 compute-0 elegant_lamarr[103097]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 11:40:16 compute-0 elegant_lamarr[103097]:         "osd_id": 0,
Nov 26 11:40:16 compute-0 elegant_lamarr[103097]:         "osd_uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:40:16 compute-0 elegant_lamarr[103097]:         "type": "bluestore"
Nov 26 11:40:16 compute-0 elegant_lamarr[103097]:     },
Nov 26 11:40:16 compute-0 elegant_lamarr[103097]:     "d56156fb-7361-4bef-b06b-1320109b4323": {
Nov 26 11:40:16 compute-0 elegant_lamarr[103097]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:40:16 compute-0 elegant_lamarr[103097]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 11:40:16 compute-0 elegant_lamarr[103097]:         "osd_id": 2,
Nov 26 11:40:16 compute-0 elegant_lamarr[103097]:         "osd_uuid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:40:16 compute-0 elegant_lamarr[103097]:         "type": "bluestore"
Nov 26 11:40:16 compute-0 elegant_lamarr[103097]:     }
Nov 26 11:40:16 compute-0 elegant_lamarr[103097]: }
Nov 26 11:40:16 compute-0 systemd[1]: libpod-6cef8ccae9ba3cc084991631eae5f7ea8ab30039e3236553404894942e5d9e67.scope: Deactivated successfully.
Nov 26 11:40:16 compute-0 podman[103130]: 2025-11-26 11:40:16.431108565 +0000 UTC m=+0.016273442 container died 6cef8ccae9ba3cc084991631eae5f7ea8ab30039e3236553404894942e5d9e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lamarr, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:40:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d280731c15edbd57f0f27f54618276f0dedea8a1290398600256014171e4ed3-merged.mount: Deactivated successfully.
Nov 26 11:40:16 compute-0 podman[103130]: 2025-11-26 11:40:16.459015993 +0000 UTC m=+0.044180870 container remove 6cef8ccae9ba3cc084991631eae5f7ea8ab30039e3236553404894942e5d9e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lamarr, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 11:40:16 compute-0 systemd[1]: libpod-conmon-6cef8ccae9ba3cc084991631eae5f7ea8ab30039e3236553404894942e5d9e67.scope: Deactivated successfully.
Nov 26 11:40:16 compute-0 sudo[102991]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:40:16 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:40:16 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:16 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev f7adcbea-ac90-4623-84f7-ab889e0c6e17 does not exist
Nov 26 11:40:16 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 0147e9d7-fdc1-415b-a89a-dba9b88fc976 does not exist
Nov 26 11:40:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:40:16 compute-0 sudo[103141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:40:16 compute-0 sudo[103141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:16 compute-0 sudo[103141]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:16 compute-0 sudo[103166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:40:16 compute-0 sudo[103166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:40:16 compute-0 sudo[103166]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:17 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v79: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 5.4 KiB/s wr, 175 op/s
Nov 26 11:40:17 compute-0 ceph-mon[74928]: 3.1c deep-scrub starts
Nov 26 11:40:17 compute-0 ceph-mon[74928]: 3.1c deep-scrub ok
Nov 26 11:40:17 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:17 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:17 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Nov 26 11:40:17 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Nov 26 11:40:17 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.b scrub starts
Nov 26 11:40:17 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.b scrub ok
Nov 26 11:40:18 compute-0 ceph-mon[74928]: pgmap v79: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 5.4 KiB/s wr, 175 op/s
Nov 26 11:40:18 compute-0 ceph-mon[74928]: 4.19 scrub starts
Nov 26 11:40:18 compute-0 ceph-mon[74928]: 4.19 scrub ok
Nov 26 11:40:18 compute-0 ceph-mon[74928]: 5.b scrub starts
Nov 26 11:40:18 compute-0 ceph-mon[74928]: 5.b scrub ok
Nov 26 11:40:18 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Nov 26 11:40:18 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Nov 26 11:40:19 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 26 11:40:19 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 26 11:40:19 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v80: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 4.2 KiB/s wr, 138 op/s
Nov 26 11:40:19 compute-0 ceph-mon[74928]: 4.1d scrub starts
Nov 26 11:40:19 compute-0 ceph-mon[74928]: 4.1d scrub ok
Nov 26 11:40:19 compute-0 ceph-mon[74928]: 2.1b scrub starts
Nov 26 11:40:19 compute-0 ceph-mon[74928]: 2.1b scrub ok
Nov 26 11:40:19 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Nov 26 11:40:19 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Nov 26 11:40:20 compute-0 ceph-mon[74928]: pgmap v80: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 4.2 KiB/s wr, 138 op/s
Nov 26 11:40:20 compute-0 ceph-mon[74928]: 4.1e scrub starts
Nov 26 11:40:20 compute-0 ceph-mon[74928]: 4.1e scrub ok
Nov 26 11:40:20 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.d deep-scrub starts
Nov 26 11:40:20 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.d deep-scrub ok
Nov 26 11:40:21 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v81: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 3.7 KiB/s wr, 119 op/s
Nov 26 11:40:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:40:21 compute-0 ceph-mon[74928]: 5.d deep-scrub starts
Nov 26 11:40:21 compute-0 ceph-mon[74928]: 5.d deep-scrub ok
Nov 26 11:40:21 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Nov 26 11:40:21 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Nov 26 11:40:22 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 26 11:40:22 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 26 11:40:22 compute-0 ceph-mon[74928]: pgmap v81: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 3.7 KiB/s wr, 119 op/s
Nov 26 11:40:22 compute-0 ceph-mon[74928]: 4.1f scrub starts
Nov 26 11:40:22 compute-0 ceph-mon[74928]: 4.1f scrub ok
Nov 26 11:40:22 compute-0 ceph-mon[74928]: 5.11 scrub starts
Nov 26 11:40:22 compute-0 ceph-mon[74928]: 5.11 scrub ok
Nov 26 11:40:23 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.17 deep-scrub starts
Nov 26 11:40:23 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.17 deep-scrub ok
Nov 26 11:40:23 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v82: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.3 KiB/s wr, 107 op/s
Nov 26 11:40:23 compute-0 ceph-mon[74928]: 2.17 deep-scrub starts
Nov 26 11:40:23 compute-0 ceph-mon[74928]: 2.17 deep-scrub ok
Nov 26 11:40:24 compute-0 ceph-mon[74928]: pgmap v82: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.3 KiB/s wr, 107 op/s
Nov 26 11:40:25 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Nov 26 11:40:25 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Nov 26 11:40:25 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v83: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.8 KiB/s wr, 92 op/s
Nov 26 11:40:25 compute-0 ceph-mon[74928]: 5.13 scrub starts
Nov 26 11:40:25 compute-0 ceph-mon[74928]: 5.13 scrub ok
Nov 26 11:40:25 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 26 11:40:25 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 26 11:40:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:40:26 compute-0 ceph-mon[74928]: pgmap v83: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.8 KiB/s wr, 92 op/s
Nov 26 11:40:26 compute-0 ceph-mon[74928]: 2.11 scrub starts
Nov 26 11:40:26 compute-0 ceph-mon[74928]: 2.11 scrub ok
Nov 26 11:40:27 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Nov 26 11:40:27 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Nov 26 11:40:27 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v84: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:27 compute-0 ceph-mon[74928]: 2.15 scrub starts
Nov 26 11:40:27 compute-0 ceph-mon[74928]: 2.15 scrub ok
Nov 26 11:40:27 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Nov 26 11:40:27 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Nov 26 11:40:28 compute-0 ceph-mon[74928]: pgmap v84: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:28 compute-0 ceph-mon[74928]: 2.13 scrub starts
Nov 26 11:40:28 compute-0 ceph-mon[74928]: 2.13 scrub ok
Nov 26 11:40:29 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v85: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:30 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 26 11:40:30 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 26 11:40:30 compute-0 ceph-mon[74928]: pgmap v85: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:30 compute-0 ceph-mon[74928]: 5.12 scrub starts
Nov 26 11:40:30 compute-0 ceph-mon[74928]: 5.12 scrub ok
Nov 26 11:40:30 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 26 11:40:30 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 26 11:40:31 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v86: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:40:31 compute-0 ceph-mon[74928]: 5.e scrub starts
Nov 26 11:40:31 compute-0 ceph-mon[74928]: 5.e scrub ok
Nov 26 11:40:32 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Nov 26 11:40:32 compute-0 ceph-mon[74928]: pgmap v86: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:32 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Nov 26 11:40:33 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.16 deep-scrub starts
Nov 26 11:40:33 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.16 deep-scrub ok
Nov 26 11:40:33 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v87: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:33 compute-0 ceph-mon[74928]: 5.14 scrub starts
Nov 26 11:40:33 compute-0 ceph-mon[74928]: 5.14 scrub ok
Nov 26 11:40:33 compute-0 ceph-mon[74928]: 5.16 deep-scrub starts
Nov 26 11:40:33 compute-0 ceph-mon[74928]: 5.16 deep-scrub ok
Nov 26 11:40:34 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 26 11:40:34 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 26 11:40:34 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Nov 26 11:40:34 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Nov 26 11:40:34 compute-0 ceph-mon[74928]: pgmap v87: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:34 compute-0 ceph-mon[74928]: 5.9 scrub starts
Nov 26 11:40:34 compute-0 ceph-mon[74928]: 5.9 scrub ok
Nov 26 11:40:35 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v88: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:35 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 26 11:40:35 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 26 11:40:35 compute-0 ceph-mon[74928]: 2.16 scrub starts
Nov 26 11:40:35 compute-0 ceph-mon[74928]: 2.16 scrub ok
Nov 26 11:40:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:40:36 compute-0 ceph-mon[74928]: pgmap v88: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:36 compute-0 ceph-mon[74928]: 2.8 scrub starts
Nov 26 11:40:36 compute-0 ceph-mon[74928]: 2.8 scrub ok
Nov 26 11:40:37 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v89: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:38 compute-0 ceph-mon[74928]: pgmap v89: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:39 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 26 11:40:39 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 26 11:40:39 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v90: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:39 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 26 11:40:39 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 26 11:40:39 compute-0 ceph-mon[74928]: 2.d scrub starts
Nov 26 11:40:39 compute-0 ceph-mon[74928]: 2.d scrub ok
Nov 26 11:40:40 compute-0 ceph-mon[74928]: pgmap v90: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:40 compute-0 ceph-mon[74928]: 2.b scrub starts
Nov 26 11:40:40 compute-0 ceph-mon[74928]: 2.b scrub ok
Nov 26 11:40:40 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Nov 26 11:40:40 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Nov 26 11:40:41 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.a scrub starts
Nov 26 11:40:41 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.a scrub ok
Nov 26 11:40:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Optimize plan auto_2025-11-26_11:40:41
Nov 26 11:40:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 11:40:41 compute-0 ceph-mgr[75197]: [balancer INFO root] do_upmap
Nov 26 11:40:41 compute-0 ceph-mgr[75197]: [balancer INFO root] pools ['images', '.mgr', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'default.rgw.control']
Nov 26 11:40:41 compute-0 ceph-mgr[75197]: [balancer INFO root] prepared 0/10 changes
Nov 26 11:40:41 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v91: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 11:40:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:40:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:40:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 11:40:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:40:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:40:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:40:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:40:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:40:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:40:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:40:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:40:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:40:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:40:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:40:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:40:41 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 26 11:40:41 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 26 11:40:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:40:41 compute-0 ceph-mon[74928]: 5.10 scrub starts
Nov 26 11:40:41 compute-0 ceph-mon[74928]: 5.10 scrub ok
Nov 26 11:40:41 compute-0 ceph-mon[74928]: 2.a scrub starts
Nov 26 11:40:41 compute-0 ceph-mon[74928]: 2.a scrub ok
Nov 26 11:40:42 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 26 11:40:42 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 26 11:40:42 compute-0 ceph-mon[74928]: pgmap v91: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:42 compute-0 ceph-mon[74928]: 5.15 scrub starts
Nov 26 11:40:42 compute-0 ceph-mon[74928]: 5.15 scrub ok
Nov 26 11:40:43 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v92: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:43 compute-0 ceph-mon[74928]: 2.1f scrub starts
Nov 26 11:40:43 compute-0 ceph-mon[74928]: 2.1f scrub ok
Nov 26 11:40:44 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 26 11:40:44 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 26 11:40:44 compute-0 ceph-mon[74928]: pgmap v92: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:44 compute-0 ceph-mon[74928]: 2.3 scrub starts
Nov 26 11:40:44 compute-0 ceph-mon[74928]: 2.3 scrub ok
Nov 26 11:40:44 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 11:40:44 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:40:44 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 11:40:44 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:40:44 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:40:44 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:40:44 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:40:44 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:40:44 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:40:44 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:40:44 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:40:44 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:40:44 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 1)
Nov 26 11:40:44 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:40:44 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 11:40:44 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:40:44 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Nov 26 11:40:44 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:40:44 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Nov 26 11:40:44 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:40:44 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 11:40:44 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:40:44 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 11:40:44 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Nov 26 11:40:44 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 26 11:40:45 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v93: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:45 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 26 11:40:45 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 26 11:40:45 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 26 11:40:45 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 26 11:40:45 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 26 11:40:45 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Nov 26 11:40:45 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Nov 26 11:40:45 compute-0 ceph-mgr[75197]: [progress INFO root] update: starting ev 691cddf2-9f32-47c1-ba4c-246ffc902334 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 26 11:40:45 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 11:40:45 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 11:40:45 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 26 11:40:45 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 26 11:40:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:40:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 26 11:40:46 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 26 11:40:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Nov 26 11:40:46 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Nov 26 11:40:46 compute-0 ceph-mgr[75197]: [progress INFO root] update: starting ev 44db03bb-2019-4ba2-abe4-e4bd73112345 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 26 11:40:46 compute-0 ceph-mon[74928]: pgmap v93: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:46 compute-0 ceph-mon[74928]: 5.5 scrub starts
Nov 26 11:40:46 compute-0 ceph-mon[74928]: 5.5 scrub ok
Nov 26 11:40:46 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 26 11:40:46 compute-0 ceph-mon[74928]: osdmap e41: 3 total, 3 up, 3 in
Nov 26 11:40:46 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 11:40:46 compute-0 ceph-mon[74928]: 5.17 scrub starts
Nov 26 11:40:46 compute-0 ceph-mon[74928]: 5.17 scrub ok
Nov 26 11:40:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 11:40:46 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 11:40:46 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.1b deep-scrub starts
Nov 26 11:40:46 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.1b deep-scrub ok
Nov 26 11:40:47 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v96: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 11:40:47 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 11:40:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Nov 26 11:40:47 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 26 11:40:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 26 11:40:47 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 26 11:40:47 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 11:40:47 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 26 11:40:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 26 11:40:47 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 26 11:40:47 compute-0 ceph-mgr[75197]: [progress INFO root] update: starting ev ae4a2675-12ff-46c1-a7d3-971028e667ab (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 26 11:40:47 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 26 11:40:47 compute-0 ceph-mon[74928]: osdmap e42: 3 total, 3 up, 3 in
Nov 26 11:40:47 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 11:40:47 compute-0 ceph-mon[74928]: 5.1b deep-scrub starts
Nov 26 11:40:47 compute-0 ceph-mon[74928]: 5.1b deep-scrub ok
Nov 26 11:40:47 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 11:40:47 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 26 11:40:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 11:40:47 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 11:40:47 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 26 11:40:47 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 26 11:40:48 compute-0 sudo[103214]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hexnznhocaxuzbvattscyhnbfoeyacvo ; /usr/bin/python3'
Nov 26 11:40:48 compute-0 sudo[103214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:40:48 compute-0 python3[103216]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 43 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=43 pruub=15.426686287s) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active pruub 99.249382019s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 43 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=43 pruub=15.426686287s) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown pruub 99.249382019s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 podman[103217]: 2025-11-26 11:40:48.34457838 +0000 UTC m=+0.026553592 container create 7cb2891052cc1aa7768a0bdaf7c4480c4978097bb8f4c37053266006610674d6 (image=quay.io/ceph/ceph:v18, name=lucid_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 11:40:48 compute-0 systemd[76422]: Starting Mark boot as successful...
Nov 26 11:40:48 compute-0 systemd[1]: Started libpod-conmon-7cb2891052cc1aa7768a0bdaf7c4480c4978097bb8f4c37053266006610674d6.scope.
Nov 26 11:40:48 compute-0 systemd[76422]: Finished Mark boot as successful.
Nov 26 11:40:48 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96c61ddec86202301dfd9fd2802c69b1c22490d4b557d279797e5866db8dc3ab/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96c61ddec86202301dfd9fd2802c69b1c22490d4b557d279797e5866db8dc3ab/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:48 compute-0 podman[103217]: 2025-11-26 11:40:48.380084 +0000 UTC m=+0.062059223 container init 7cb2891052cc1aa7768a0bdaf7c4480c4978097bb8f4c37053266006610674d6 (image=quay.io/ceph/ceph:v18, name=lucid_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:40:48 compute-0 podman[103217]: 2025-11-26 11:40:48.384315183 +0000 UTC m=+0.066290395 container start 7cb2891052cc1aa7768a0bdaf7c4480c4978097bb8f4c37053266006610674d6 (image=quay.io/ceph/ceph:v18, name=lucid_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:40:48 compute-0 podman[103217]: 2025-11-26 11:40:48.385340842 +0000 UTC m=+0.067316053 container attach 7cb2891052cc1aa7768a0bdaf7c4480c4978097bb8f4c37053266006610674d6 (image=quay.io/ceph/ceph:v18, name=lucid_vaughan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 11:40:48 compute-0 podman[103217]: 2025-11-26 11:40:48.333760745 +0000 UTC m=+0.015735967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:40:48 compute-0 lucid_vaughan[103230]: could not fetch user info: no user info saved
Nov 26 11:40:48 compute-0 systemd[1]: libpod-7cb2891052cc1aa7768a0bdaf7c4480c4978097bb8f4c37053266006610674d6.scope: Deactivated successfully.
Nov 26 11:40:48 compute-0 podman[103217]: 2025-11-26 11:40:48.475467904 +0000 UTC m=+0.157443117 container died 7cb2891052cc1aa7768a0bdaf7c4480c4978097bb8f4c37053266006610674d6 (image=quay.io/ceph/ceph:v18, name=lucid_vaughan, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:40:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-96c61ddec86202301dfd9fd2802c69b1c22490d4b557d279797e5866db8dc3ab-merged.mount: Deactivated successfully.
Nov 26 11:40:48 compute-0 podman[103217]: 2025-11-26 11:40:48.497399223 +0000 UTC m=+0.179374435 container remove 7cb2891052cc1aa7768a0bdaf7c4480c4978097bb8f4c37053266006610674d6 (image=quay.io/ceph/ceph:v18, name=lucid_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 11:40:48 compute-0 systemd[1]: libpod-conmon-7cb2891052cc1aa7768a0bdaf7c4480c4978097bb8f4c37053266006610674d6.scope: Deactivated successfully.
Nov 26 11:40:48 compute-0 sudo[103214]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:48 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 26 11:40:48 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 26 11:40:48 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 26 11:40:48 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 26 11:40:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 43 pg[6.0( v 37'39 (0'0,37'39] local-lis/les=21/22 n=22 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=43 pruub=14.112607956s) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 33'38 mlcod 33'38 active pruub 101.724754333s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:48 compute-0 ceph-mgr[75197]: [progress INFO root] update: starting ev f8de755e-3db3-459d-bdb8-a5db0bb665f3 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 26 11:40:48 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 11:40:48 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 11:40:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 44 pg[6.0( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=43 pruub=14.112607956s) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 33'38 mlcod 0'0 unknown pruub 101.724754333s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 44 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 44 pg[6.4( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 44 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 44 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 44 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=21/22 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 44 pg[6.2( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 44 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 44 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 44 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 44 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 44 pg[6.c( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 44 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 44 pg[6.e( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 44 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 44 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.1e( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.1d( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.12( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.10( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:48 compute-0 sudo[103348]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zosqzkgjmubupzmvcydcbbfbabjjiota ; /usr/bin/python3'
Nov 26 11:40:48 compute-0 ceph-mon[74928]: pgmap v96: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:48 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 26 11:40:48 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 11:40:48 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 26 11:40:48 compute-0 ceph-mon[74928]: osdmap e43: 3 total, 3 up, 3 in
Nov 26 11:40:48 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 11:40:48 compute-0 ceph-mon[74928]: 5.1c scrub starts
Nov 26 11:40:48 compute-0 ceph-mon[74928]: 5.1c scrub ok
Nov 26 11:40:48 compute-0 sudo[103348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.1d( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.1e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.12( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.10( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.0( empty local-lis/les=43/44 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:48 compute-0 python3[103350]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:40:48 compute-0 podman[103351]: 2025-11-26 11:40:48.774710425 +0000 UTC m=+0.028392365 container create af68ad3ff2b5cfd7805bc778d7f159f257bb5ed711377bb4016803a6fa369524 (image=quay.io/ceph/ceph:v18, name=sharp_nobel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:40:48 compute-0 systemd[1]: Started libpod-conmon-af68ad3ff2b5cfd7805bc778d7f159f257bb5ed711377bb4016803a6fa369524.scope.
Nov 26 11:40:48 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:40:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18d7814592e8800c3ab042ead73ecc1d4dece3d02f364b6357c433e4e430720/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18d7814592e8800c3ab042ead73ecc1d4dece3d02f364b6357c433e4e430720/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:40:48 compute-0 podman[103351]: 2025-11-26 11:40:48.825782602 +0000 UTC m=+0.079464532 container init af68ad3ff2b5cfd7805bc778d7f159f257bb5ed711377bb4016803a6fa369524 (image=quay.io/ceph/ceph:v18, name=sharp_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:40:48 compute-0 podman[103351]: 2025-11-26 11:40:48.829535812 +0000 UTC m=+0.083217742 container start af68ad3ff2b5cfd7805bc778d7f159f257bb5ed711377bb4016803a6fa369524 (image=quay.io/ceph/ceph:v18, name=sharp_nobel, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 11:40:48 compute-0 podman[103351]: 2025-11-26 11:40:48.830750587 +0000 UTC m=+0.084432538 container attach af68ad3ff2b5cfd7805bc778d7f159f257bb5ed711377bb4016803a6fa369524 (image=quay.io/ceph/ceph:v18, name=sharp_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 11:40:48 compute-0 podman[103351]: 2025-11-26 11:40:48.762804745 +0000 UTC m=+0.016486694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 11:40:48 compute-0 sharp_nobel[103363]: {
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:     "user_id": "openstack",
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:     "display_name": "openstack",
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:     "email": "",
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:     "suspended": 0,
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:     "max_buckets": 1000,
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:     "subusers": [],
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:     "keys": [
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:         {
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:             "user": "openstack",
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:             "access_key": "OTFKEUYIKBLVJMZX36PF",
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:             "secret_key": "BNQXqnt2beuaw6dXkgTe7LZoryXFGFm3rKPvyC6A"
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:         }
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:     ],
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:     "swift_keys": [],
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:     "caps": [],
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:     "op_mask": "read, write, delete",
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:     "default_placement": "",
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:     "default_storage_class": "",
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:     "placement_tags": [],
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:     "bucket_quota": {
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:         "enabled": false,
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:         "check_on_raw": false,
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:         "max_size": -1,
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:         "max_size_kb": 0,
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:         "max_objects": -1
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:     },
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:     "user_quota": {
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:         "enabled": false,
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:         "check_on_raw": false,
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:         "max_size": -1,
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:         "max_size_kb": 0,
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:         "max_objects": -1
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:     },
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:     "temp_url_keys": [],
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:     "type": "rgw",
Nov 26 11:40:48 compute-0 sharp_nobel[103363]:     "mfa_ids": []
Nov 26 11:40:48 compute-0 sharp_nobel[103363]: }
Nov 26 11:40:48 compute-0 systemd[1]: libpod-af68ad3ff2b5cfd7805bc778d7f159f257bb5ed711377bb4016803a6fa369524.scope: Deactivated successfully.
Nov 26 11:40:48 compute-0 podman[103448]: 2025-11-26 11:40:48.959153585 +0000 UTC m=+0.018548801 container died af68ad3ff2b5cfd7805bc778d7f159f257bb5ed711377bb4016803a6fa369524 (image=quay.io/ceph/ceph:v18, name=sharp_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 11:40:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-f18d7814592e8800c3ab042ead73ecc1d4dece3d02f364b6357c433e4e430720-merged.mount: Deactivated successfully.
Nov 26 11:40:48 compute-0 podman[103448]: 2025-11-26 11:40:48.975777679 +0000 UTC m=+0.035172894 container remove af68ad3ff2b5cfd7805bc778d7f159f257bb5ed711377bb4016803a6fa369524 (image=quay.io/ceph/ceph:v18, name=sharp_nobel, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:40:48 compute-0 systemd[1]: libpod-conmon-af68ad3ff2b5cfd7805bc778d7f159f257bb5ed711377bb4016803a6fa369524.scope: Deactivated successfully.
Nov 26 11:40:48 compute-0 sudo[103348]: pam_unix(sudo:session): session closed for user root
Nov 26 11:40:49 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v99: 181 pgs: 15 unknown, 166 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 26 11:40:49 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 11:40:49 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 11:40:49 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 11:40:49 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 11:40:49 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 26 11:40:49 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 26 11:40:49 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 11:40:49 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 11:40:49 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 26 11:40:49 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 26 11:40:49 compute-0 ceph-mgr[75197]: [progress INFO root] update: starting ev 299e03e3-22a9-48da-8f55-a1df619d7b4d (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 26 11:40:49 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 45 pg[9.0( v 44'389 (0'0,44'389] local-lis/les=34/35 n=177 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=14.475891113s) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 44'388 mlcod 44'388 active pruub 99.602706909s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:49 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 45 pg[8.0( v 33'4 (0'0,33'4] local-lis/les=32/33 n=4 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=45 pruub=12.471073151s) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 33'3 mlcod 33'3 active pruub 97.598121643s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:49 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 11:40:49 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 11:40:49 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 45 pg[8.0( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=45 pruub=12.471073151s) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 33'3 mlcod 0'0 unknown pruub 97.598121643s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:49 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 45 pg[9.0( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=14.475891113s) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 44'388 mlcod 0'0 unknown pruub 99.602706909s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:49 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 45 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:49 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 45 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:49 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 45 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:49 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 45 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:49 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 45 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:49 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 45 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:49 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 45 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:49 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 45 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:49 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 45 pg[6.0( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 33'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:49 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 45 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:49 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 45 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:49 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 45 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:49 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 45 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:49 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 45 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:49 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 45 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:49 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 45 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:49 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 26 11:40:49 compute-0 ceph-mon[74928]: osdmap e44: 3 total, 3 up, 3 in
Nov 26 11:40:49 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 11:40:49 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 11:40:49 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 11:40:49 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 26 11:40:49 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 11:40:49 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 11:40:49 compute-0 ceph-mon[74928]: osdmap e45: 3 total, 3 up, 3 in
Nov 26 11:40:49 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 11:40:50 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 26 11:40:50 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.f scrub ok
Nov 26 11:40:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 26 11:40:50 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 26 11:40:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 26 11:40:50 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 26 11:40:50 compute-0 ceph-mgr[75197]: [progress INFO root] update: starting ev 5a40cbcf-e840-4c44-b8fc-8dda8599c099 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 26 11:40:50 compute-0 ceph-mgr[75197]: [progress INFO root] complete: finished ev 691cddf2-9f32-47c1-ba4c-246ffc902334 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 26 11:40:50 compute-0 ceph-mgr[75197]: [progress INFO root] Completed event 691cddf2-9f32-47c1-ba4c-246ffc902334 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Nov 26 11:40:50 compute-0 ceph-mgr[75197]: [progress INFO root] complete: finished ev 44db03bb-2019-4ba2-abe4-e4bd73112345 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 26 11:40:50 compute-0 ceph-mgr[75197]: [progress INFO root] Completed event 44db03bb-2019-4ba2-abe4-e4bd73112345 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Nov 26 11:40:50 compute-0 ceph-mgr[75197]: [progress INFO root] complete: finished ev ae4a2675-12ff-46c1-a7d3-971028e667ab (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 26 11:40:50 compute-0 ceph-mgr[75197]: [progress INFO root] Completed event ae4a2675-12ff-46c1-a7d3-971028e667ab (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Nov 26 11:40:50 compute-0 ceph-mgr[75197]: [progress INFO root] complete: finished ev f8de755e-3db3-459d-bdb8-a5db0bb665f3 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 26 11:40:50 compute-0 ceph-mgr[75197]: [progress INFO root] Completed event f8de755e-3db3-459d-bdb8-a5db0bb665f3 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Nov 26 11:40:50 compute-0 ceph-mgr[75197]: [progress INFO root] complete: finished ev 299e03e3-22a9-48da-8f55-a1df619d7b4d (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 26 11:40:50 compute-0 ceph-mgr[75197]: [progress INFO root] Completed event 299e03e3-22a9-48da-8f55-a1df619d7b4d (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Nov 26 11:40:50 compute-0 ceph-mgr[75197]: [progress INFO root] complete: finished ev 5a40cbcf-e840-4c44-b8fc-8dda8599c099 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 26 11:40:50 compute-0 ceph-mgr[75197]: [progress INFO root] Completed event 5a40cbcf-e840-4c44-b8fc-8dda8599c099 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.14( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.16( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.17( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.15( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.11( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.1( v 33'4 (0'0,33'4] local-lis/les=32/33 n=1 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=1 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.3( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.3( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=1 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.2( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.d( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.8( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.9( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.a( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.9( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.8( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.1( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.7( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.6( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.6( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.7( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.4( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.5( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=1 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.5( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.1a( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.19( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.1e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.1f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.1f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.1e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.1c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.13( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.14( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.1d( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.10( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.b( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.1b( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.1d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.1a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.18( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.19( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.17( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.16( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.12( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.13( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.17( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.0( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 44'388 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.1( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.3( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.2( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.8( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.a( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.0( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 33'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.7( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.4( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.5( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.1a( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.1e( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.14( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.10( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.19( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.16( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[9.12( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.13( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 46 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:50 compute-0 ceph-mon[74928]: pgmap v99: 181 pgs: 15 unknown, 166 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 26 11:40:50 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 26 11:40:50 compute-0 ceph-mon[74928]: osdmap e46: 3 total, 3 up, 3 in
Nov 26 11:40:51 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v102: 243 pgs: 62 unknown, 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 26 11:40:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 11:40:51 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 11:40:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 11:40:51 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 11:40:51 compute-0 ceph-mgr[75197]: [progress INFO root] Writing back 15 completed events
Nov 26 11:40:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 26 11:40:51 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:40:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 26 11:40:51 compute-0 ceph-mon[74928]: 2.f scrub starts
Nov 26 11:40:51 compute-0 ceph-mon[74928]: 2.f scrub ok
Nov 26 11:40:51 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 11:40:51 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 11:40:51 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:40:51 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 11:40:51 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 11:40:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 26 11:40:51 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 26 11:40:51 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 47 pg[11.0( v 44'2 (0'0,44'2] local-lis/les=38/39 n=2 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=47 pruub=8.461942673s) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 44'1 mlcod 44'1 active pruub 95.623985291s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:51 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 47 pg[11.0( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=47 pruub=8.461942673s) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 44'1 mlcod 0'0 unknown pruub 95.623985291s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.5 deep-scrub starts
Nov 26 11:40:52 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.1c deep-scrub starts
Nov 26 11:40:52 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.5 deep-scrub ok
Nov 26 11:40:52 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.1c deep-scrub ok
Nov 26 11:40:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 26 11:40:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 26 11:40:52 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.14( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.13( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.2( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=1 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=38/39 n=1 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.16( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.8( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.4( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.5( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.7( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.6( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.18( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.1b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.1c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.1d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.1e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.1f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.11( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.12( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.19( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.1a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.15( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.10( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:52 compute-0 ceph-mon[74928]: pgmap v102: 243 pgs: 62 unknown, 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 26 11:40:52 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 11:40:52 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 11:40:52 compute-0 ceph-mon[74928]: osdmap e47: 3 total, 3 up, 3 in
Nov 26 11:40:52 compute-0 ceph-mon[74928]: 2.5 deep-scrub starts
Nov 26 11:40:52 compute-0 ceph-mon[74928]: 2.5 deep-scrub ok
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.13( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.0( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 44'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.16( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.c( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.5( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.7( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.1d( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 48 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:52 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.1f deep-scrub starts
Nov 26 11:40:52 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.1f deep-scrub ok
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=12.765938759s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 44'63 active pruub 97.784805298s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=12.765938759s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 97.784805298s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:53 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v105: 305 pgs: 124 unknown, 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:53 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.7 deep-scrub starts
Nov 26 11:40:53 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.7 deep-scrub ok
Nov 26 11:40:53 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 26 11:40:53 compute-0 ceph-mon[74928]: 2.1c deep-scrub starts
Nov 26 11:40:53 compute-0 ceph-mon[74928]: 2.1c deep-scrub ok
Nov 26 11:40:53 compute-0 ceph-mon[74928]: osdmap e48: 3 total, 3 up, 3 in
Nov 26 11:40:53 compute-0 ceph-mon[74928]: 5.1f deep-scrub starts
Nov 26 11:40:53 compute-0 ceph-mon[74928]: 5.1f deep-scrub ok
Nov 26 11:40:53 compute-0 ceph-mon[74928]: 2.7 deep-scrub starts
Nov 26 11:40:53 compute-0 ceph-mon[74928]: 2.7 deep-scrub ok
Nov 26 11:40:53 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 26 11:40:53 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:40:54 compute-0 ceph-mon[74928]: pgmap v105: 305 pgs: 124 unknown, 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:40:54 compute-0 ceph-mon[74928]: osdmap e49: 3 total, 3 up, 3 in
Nov 26 11:40:55 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v107: 305 pgs: 31 unknown, 274 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 642 B/s rd, 428 B/s wr, 1 op/s
Nov 26 11:40:55 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 26 11:40:55 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 26 11:40:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:40:56 compute-0 ceph-mon[74928]: pgmap v107: 305 pgs: 31 unknown, 274 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 642 B/s rd, 428 B/s wr, 1 op/s
Nov 26 11:40:56 compute-0 ceph-mon[74928]: 4.1b scrub starts
Nov 26 11:40:56 compute-0 ceph-mon[74928]: 4.1b scrub ok
Nov 26 11:40:56 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Nov 26 11:40:56 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Nov 26 11:40:57 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 26 11:40:57 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 26 11:40:57 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v108: 305 pgs: 31 unknown, 274 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 341 B/s wr, 0 op/s
Nov 26 11:40:57 compute-0 ceph-mon[74928]: 4.1c scrub starts
Nov 26 11:40:57 compute-0 ceph-mon[74928]: 4.1c scrub ok
Nov 26 11:40:58 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 26 11:40:58 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 26 11:40:58 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.6 deep-scrub starts
Nov 26 11:40:58 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.6 deep-scrub ok
Nov 26 11:40:58 compute-0 ceph-mon[74928]: 2.2 scrub starts
Nov 26 11:40:58 compute-0 ceph-mon[74928]: 2.2 scrub ok
Nov 26 11:40:58 compute-0 ceph-mon[74928]: pgmap v108: 305 pgs: 31 unknown, 274 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 341 B/s wr, 0 op/s
Nov 26 11:40:58 compute-0 ceph-mon[74928]: 2.6 deep-scrub starts
Nov 26 11:40:58 compute-0 ceph-mon[74928]: 2.6 deep-scrub ok
Nov 26 11:40:58 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.a deep-scrub starts
Nov 26 11:40:58 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.a deep-scrub ok
Nov 26 11:40:59 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Nov 26 11:40:59 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Nov 26 11:40:59 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v109: 305 pgs: 305 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 396 B/s rd, 264 B/s wr, 0 op/s
Nov 26 11:40:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 11:40:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 11:40:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 11:40:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 11:40:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 26 11:40:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 26 11:40:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 11:40:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 11:40:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 26 11:40:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 26 11:40:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 11:40:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 11:40:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 26 11:40:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 11:40:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 11:40:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 26 11:40:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 11:40:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 26 11:40:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 11:40:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 26 11:40:59 compute-0 ceph-mon[74928]: 2.1d scrub starts
Nov 26 11:40:59 compute-0 ceph-mon[74928]: 2.1d scrub ok
Nov 26 11:40:59 compute-0 ceph-mon[74928]: 4.a deep-scrub starts
Nov 26 11:40:59 compute-0 ceph-mon[74928]: 4.a deep-scrub ok
Nov 26 11:40:59 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 11:40:59 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 11:40:59 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 26 11:40:59 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 11:40:59 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 26 11:40:59 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 11:40:59 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.937854767s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 112.620117188s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.937774658s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 112.620079041s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.937814713s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.620117188s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.937750816s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.620079041s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.937582970s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 112.620094299s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.937567711s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.620094299s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938472748s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 112.621139526s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938455582s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621139526s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938433647s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 112.621215820s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938420296s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621215820s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938388824s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 112.621246338s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938374519s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621246338s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938362122s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 112.621253967s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938333511s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621253967s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938465118s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 112.621131897s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938138962s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621131897s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.1( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972146988s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.173072815s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972130775s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.173072815s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.935024261s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 110.136032104s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.935012817s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.136032104s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.950508118s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.151596069s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.950497627s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.151596069s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.932977676s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.134140015s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.932966232s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.134140015s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.950338364s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.151618958s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.950325966s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.151618958s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.934611320s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.136009216s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.934591293s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.136009216s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.975613594s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177124023s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.975599289s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177124023s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.949968338s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.151588440s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.934436798s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 110.136062622s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.949954987s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.151588440s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.934420586s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.136062622s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.949865341s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.151580811s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.934347153s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.136062622s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.949852943s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.151580811s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.934334755s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.136062622s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.975439072s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177230835s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.975426674s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177230835s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.949805260s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.151626587s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.949793816s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.151626587s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.975353241s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177223206s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.975332260s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177223206s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.934107780s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 110.136100769s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.934098244s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.136093140s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.934068680s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.136093140s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.949538231s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.151618958s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.949526787s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.151618958s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.975136757s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177238464s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.975124359s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177238464s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.949336052s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.151496887s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.949324608s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.151496887s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.934011459s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.136199951s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.933999062s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.136199951s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.933954239s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 110.136207581s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.933941841s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.136207581s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.975097656s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177375793s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.975085258s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177375793s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.949176788s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.151527405s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.933849335s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.136222839s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.933835983s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.136222839s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.933701515s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.136100769s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.974746704s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177261353s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.974734306s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177261353s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.933632851s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.136230469s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.933620453s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.136230469s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.948804855s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.151420593s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.948789597s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.151420593s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.933560371s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 110.136238098s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.933547974s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.136238098s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.974533081s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177276611s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.974516869s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177276611s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.933468819s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 110.136238098s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.933458328s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.136238098s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.938960075s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.141853333s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.974438667s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177337646s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.938944817s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.141853333s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.974425316s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177337646s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.948418617s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.151405334s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.948404312s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.151405334s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.938811302s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.141838074s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.938792229s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.141838074s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.974210739s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177383423s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.938748360s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.141929626s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.974196434s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177383423s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.938733101s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.141929626s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.948089600s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.151390076s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.949161530s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.151527405s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.938595772s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 110.141952515s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.973976135s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177398682s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.938531876s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.141952515s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.973961830s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177398682s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.947900772s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.151390076s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.947836876s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.151382446s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.947820663s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.151382446s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.947785378s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.151374817s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.947771072s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.151374817s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.938490868s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 110.142166138s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.938279152s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.141952515s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.938475609s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142166138s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.938260078s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.141952515s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.973668098s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177429199s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.947587013s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.151351929s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.973655701s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177429199s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.947567940s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.151351929s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.938311577s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.142189026s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.938300133s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142189026s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.938292503s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 110.142189026s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.973497391s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177444458s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.973485947s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177444458s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.938279152s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142189026s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.938146591s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.142196655s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.938131332s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142196655s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.973366737s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177444458s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.973312378s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177444458s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.938104630s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.142356873s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.973200798s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177467346s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.938089371s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142356873s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.973186493s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177467346s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937932968s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.142341614s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937918663s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142341614s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937891006s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 110.142333984s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937874794s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142333984s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.939662933s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.144172668s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.939648628s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144172668s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972956657s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177490234s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937980652s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.142601013s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937968254s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142601013s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972817421s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177497864s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972800255s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177497864s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937825203s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 110.142539978s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937811852s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142539978s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.939379692s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.144195557s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937810898s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.142639160s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.939366341s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144195557s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937796593s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142639160s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972943306s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177490234s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972625732s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177513123s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972613335s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177513123s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937573433s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 110.142532349s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937561035s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142532349s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937503815s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.142539978s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972482681s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177513123s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972465515s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177513123s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937141418s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 110.142250061s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937127113s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142250061s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937415123s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142539978s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937284470s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.142539978s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937266350s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142539978s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972103119s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177536011s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972084999s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177536011s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.938540459s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.144149780s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.938523293s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144149780s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936826706s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 110.142555237s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936812401s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142555237s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971720695s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177536011s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971708298s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177536011s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936662674s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 110.142562866s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936652184s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142562866s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.938215256s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.144203186s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.938200951s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144203186s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936482430s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.142585754s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936468124s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142585754s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971364975s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177566528s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971353531s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177566528s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971140862s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177566528s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971126556s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177566528s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936058044s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 110.142646790s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936043739s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142646790s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.970899582s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177581787s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.970885277s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177581787s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.937348366s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.144134521s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.937335014s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144134521s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.942935944s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.151405334s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.942917824s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.151405334s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965672493s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.354835510s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965634346s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354835510s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965537071s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.354843140s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965523720s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354843140s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.967114449s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 101.356491089s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.967093468s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.356491089s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965385437s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.354843140s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965375900s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354843140s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965397835s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.354949951s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965386391s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354949951s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965286255s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.354904175s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965275764s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354904175s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965224266s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.354919434s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965213776s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354919434s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965167046s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.354927063s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965156555s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354927063s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965708733s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.355575562s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965698242s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355575562s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965760231s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.355712891s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965748787s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355712891s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965558052s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.355590820s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965546608s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355590820s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965547562s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.355667114s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965536118s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355667114s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965463638s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.355682373s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965453148s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355682373s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965367317s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.355697632s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965354919s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355697632s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965303421s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 101.355712891s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965286255s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.355712891s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965250969s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.355758667s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965239525s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355758667s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965145111s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.355773926s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965133667s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355773926s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965089798s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 101.355735779s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965060234s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.355735779s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965779305s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 101.356475830s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965764999s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.356475830s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.964997292s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 101.355789185s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.964989662s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.355796814s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.964977264s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.355789185s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.964975357s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355796814s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965548515s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.356452942s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:40:59 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965533257s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.356452942s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:40:59 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:00 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 26 11:41:00 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 26 11:41:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 26 11:41:00 compute-0 ceph-mon[74928]: 5.4 scrub starts
Nov 26 11:41:00 compute-0 ceph-mon[74928]: 5.4 scrub ok
Nov 26 11:41:00 compute-0 ceph-mon[74928]: pgmap v109: 305 pgs: 305 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 396 B/s rd, 264 B/s wr, 0 op/s
Nov 26 11:41:00 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 11:41:00 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 11:41:00 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 26 11:41:00 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 11:41:00 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 26 11:41:00 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 11:41:00 compute-0 ceph-mon[74928]: osdmap e50: 3 total, 3 up, 3 in
Nov 26 11:41:00 compute-0 ceph-mon[74928]: 5.1 scrub starts
Nov 26 11:41:00 compute-0 ceph-mon[74928]: 5.1 scrub ok
Nov 26 11:41:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 26 11:41:00 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 lc 44'56 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.e( v 49'65 lc 44'48 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.d( v 49'65 lc 44'50 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.15( v 49'65 lc 44'46 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 lc 33'6 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.14( v 49'65 lc 44'54 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:00 compute-0 sshd-session[103460]: Accepted publickey for zuul from 192.168.122.30 port 56534 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:41:00 compute-0 systemd-logind[744]: New session 33 of user zuul.
Nov 26 11:41:00 compute-0 systemd[1]: Started Session 33 of User zuul.
Nov 26 11:41:00 compute-0 sshd-session[103460]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:41:01 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v112: 305 pgs: 16 unknown, 41 peering, 248 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:41:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:41:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 26 11:41:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 26 11:41:01 compute-0 ceph-mon[74928]: osdmap e51: 3 total, 3 up, 3 in
Nov 26 11:41:01 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 26 11:41:01 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:01 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:01 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:01 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:01 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:01 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:01 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:01 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:01 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:01 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:01 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:01 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:01 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:01 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:01 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:01 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:01 compute-0 python3.9[103613]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:41:01 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 26 11:41:01 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 26 11:41:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 26 11:41:02 compute-0 ceph-mon[74928]: pgmap v112: 305 pgs: 16 unknown, 41 peering, 248 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:41:02 compute-0 ceph-mon[74928]: osdmap e52: 3 total, 3 up, 3 in
Nov 26 11:41:02 compute-0 ceph-mon[74928]: 3.7 scrub starts
Nov 26 11:41:02 compute-0 ceph-mon[74928]: 3.7 scrub ok
Nov 26 11:41:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 26 11:41:02 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023622513s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.238182068s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023562431s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238182068s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022793770s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.237480164s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022745132s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237480164s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023048401s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.237869263s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023011208s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237869263s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022819519s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.237731934s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022778511s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237731934s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023331642s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.238349915s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022944450s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.237998962s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023300171s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238349915s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022917747s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237998962s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022416115s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.237792969s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022385597s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237792969s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022971153s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.238403320s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022942543s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238403320s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022498131s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.238121033s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022468567s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238121033s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022225380s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.237945557s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022089005s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.237884521s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021659851s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.237464905s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022052765s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237884521s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021580696s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237464905s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.020052910s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.235961914s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.020028114s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.235961914s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022456169s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.238464355s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021612167s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.237617493s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022437096s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238464355s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021568298s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237617493s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021500587s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.237632751s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021479607s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237632751s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021776199s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237945557s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:02 compute-0 sudo[103829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfzbgagtsawxbehtqsewjylpfupfptez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157262.5348976-32-193960635316835/AnsiballZ_command.py'
Nov 26 11:41:02 compute-0 sudo[103829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:41:02 compute-0 python3.9[103831]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:41:03 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 26 11:41:03 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v115: 305 pgs: 16 unknown, 41 peering, 248 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 94 B/s, 0 objects/s recovering
Nov 26 11:41:03 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 26 11:41:03 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 26 11:41:03 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 26 11:41:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 26 11:41:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 26 11:41:03 compute-0 ceph-mon[74928]: osdmap e53: 3 total, 3 up, 3 in
Nov 26 11:41:03 compute-0 ceph-mon[74928]: 2.9 scrub starts
Nov 26 11:41:03 compute-0 ceph-mon[74928]: 2.9 scrub ok
Nov 26 11:41:03 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 26 11:41:03 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:03 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:03 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:03 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:03 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:03 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:03 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:03 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:03 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:03 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:03 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:03 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:03 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:03 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:03 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:03 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:04 compute-0 ceph-mon[74928]: 5.7 scrub starts
Nov 26 11:41:04 compute-0 ceph-mon[74928]: pgmap v115: 305 pgs: 16 unknown, 41 peering, 248 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 94 B/s, 0 objects/s recovering
Nov 26 11:41:04 compute-0 ceph-mon[74928]: 5.7 scrub ok
Nov 26 11:41:04 compute-0 ceph-mon[74928]: osdmap e54: 3 total, 3 up, 3 in
Nov 26 11:41:05 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v117: 305 pgs: 16 unknown, 41 peering, 248 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 249 B/s, 2 keys/s, 2 objects/s recovering
Nov 26 11:41:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:41:06 compute-0 ceph-mon[74928]: pgmap v117: 305 pgs: 16 unknown, 41 peering, 248 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 249 B/s, 2 keys/s, 2 objects/s recovering
Nov 26 11:41:06 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 26 11:41:06 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 26 11:41:07 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v118: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.8 KiB/s wr, 92 op/s; 800 B/s, 1 keys/s, 18 objects/s recovering
Nov 26 11:41:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 26 11:41:07 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 26 11:41:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 26 11:41:07 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 26 11:41:07 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 26 11:41:07 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 26 11:41:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 26 11:41:07 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 26 11:41:07 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 26 11:41:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 26 11:41:07 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 26 11:41:07 compute-0 ceph-mon[74928]: 3.5 scrub starts
Nov 26 11:41:07 compute-0 ceph-mon[74928]: 3.5 scrub ok
Nov 26 11:41:07 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 26 11:41:07 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 26 11:41:07 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.900985718s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 120.620323181s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:07 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.900940895s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.620323181s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:07 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901856422s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 120.621246338s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:07 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901811600s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621246338s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:07 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901603699s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 120.621353149s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:07 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901576042s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621353149s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:07 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901453972s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 120.621376038s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:07 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901418686s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621376038s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:07 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:07 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:07 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:07 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:08 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.19 deep-scrub starts
Nov 26 11:41:08 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.19 deep-scrub ok
Nov 26 11:41:08 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 26 11:41:08 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 26 11:41:08 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 26 11:41:08 compute-0 ceph-mon[74928]: pgmap v118: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.8 KiB/s wr, 92 op/s; 800 B/s, 1 keys/s, 18 objects/s recovering
Nov 26 11:41:08 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=55/56 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:08 compute-0 ceph-mon[74928]: 5.3 scrub starts
Nov 26 11:41:08 compute-0 ceph-mon[74928]: 5.3 scrub ok
Nov 26 11:41:08 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.e( v 37'39 lc 33'14 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:08 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:08 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=55/56 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:08 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 26 11:41:08 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 26 11:41:08 compute-0 ceph-mon[74928]: osdmap e55: 3 total, 3 up, 3 in
Nov 26 11:41:08 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.1d deep-scrub starts
Nov 26 11:41:08 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.1d deep-scrub ok
Nov 26 11:41:08 compute-0 sudo[103829]: pam_unix(sudo:session): session closed for user root
Nov 26 11:41:09 compute-0 sshd-session[103463]: Connection closed by 192.168.122.30 port 56534
Nov 26 11:41:09 compute-0 sshd-session[103460]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:41:09 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Nov 26 11:41:09 compute-0 systemd[1]: session-33.scope: Consumed 6.458s CPU time.
Nov 26 11:41:09 compute-0 systemd-logind[744]: Session 33 logged out. Waiting for processes to exit.
Nov 26 11:41:09 compute-0 systemd-logind[744]: Removed session 33.
Nov 26 11:41:09 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v121: 305 pgs: 1 active+recovering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.8 KiB/s wr, 92 op/s; 1/215 objects misplaced (0.465%); 737 B/s, 1 keys/s, 17 objects/s recovering
Nov 26 11:41:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 26 11:41:09 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 26 11:41:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 26 11:41:09 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 26 11:41:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 26 11:41:09 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 26 11:41:09 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 26 11:41:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 26 11:41:09 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 26 11:41:09 compute-0 ceph-mon[74928]: 2.19 deep-scrub starts
Nov 26 11:41:09 compute-0 ceph-mon[74928]: 2.19 deep-scrub ok
Nov 26 11:41:09 compute-0 ceph-mon[74928]: osdmap e56: 3 total, 3 up, 3 in
Nov 26 11:41:09 compute-0 ceph-mon[74928]: 3.1d deep-scrub starts
Nov 26 11:41:09 compute-0 ceph-mon[74928]: 3.1d deep-scrub ok
Nov 26 11:41:09 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 26 11:41:09 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 26 11:41:10 compute-0 ceph-mon[74928]: pgmap v121: 305 pgs: 1 active+recovering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.8 KiB/s wr, 92 op/s; 1/215 objects misplaced (0.465%); 737 B/s, 1 keys/s, 17 objects/s recovering
Nov 26 11:41:10 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 26 11:41:10 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 26 11:41:10 compute-0 ceph-mon[74928]: osdmap e57: 3 total, 3 up, 3 in
Nov 26 11:41:11 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v123: 305 pgs: 1 active+recovering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.8 KiB/s wr, 92 op/s; 1/215 objects misplaced (0.465%); 604 B/s, 15 objects/s recovering
Nov 26 11:41:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 26 11:41:11 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 26 11:41:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 26 11:41:11 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 26 11:41:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:41:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:41:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:41:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:41:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:41:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:41:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:41:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 26 11:41:11 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 26 11:41:11 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 26 11:41:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 26 11:41:11 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 26 11:41:11 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58 pruub=9.882452011s) [1] r=-1 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 120.620277405s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:11 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58 pruub=9.882416725s) [1] r=-1 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.620277405s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:11 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58 pruub=9.883186340s) [1] r=-1 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 120.621376038s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:11 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58 pruub=9.883127213s) [1] r=-1 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621376038s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:11 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 26 11:41:11 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 26 11:41:11 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:11 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:11 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966836929s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 37'39 active pruub 120.224609375s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:11 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966792107s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224609375s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:11 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57 pruub=12.966692924s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 37'39 active pruub 120.224693298s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:11 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:11 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57 pruub=12.966661453s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224693298s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:11 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966414452s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 37'39 active pruub 120.224662781s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:11 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966257095s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224662781s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:11 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966341019s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 37'39 active pruub 120.224655151s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:11 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:11 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966003418s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224655151s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:11 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:11 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:12 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 26 11:41:12 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 26 11:41:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 26 11:41:12 compute-0 ceph-mon[74928]: pgmap v123: 305 pgs: 1 active+recovering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.8 KiB/s wr, 92 op/s; 1/215 objects misplaced (0.465%); 604 B/s, 15 objects/s recovering
Nov 26 11:41:12 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 26 11:41:12 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 26 11:41:12 compute-0 ceph-mon[74928]: osdmap e58: 3 total, 3 up, 3 in
Nov 26 11:41:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 26 11:41:12 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 26 11:41:12 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.c( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=58/59 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:12 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.4( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=58/59 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:12 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:12 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.7( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:12 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:12 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:13 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v126: 305 pgs: 1 active+recovering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1/215 objects misplaced (0.465%); 0 B/s, 0 objects/s recovering
Nov 26 11:41:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 26 11:41:13 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 26 11:41:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 26 11:41:13 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 26 11:41:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 26 11:41:13 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 26 11:41:13 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 26 11:41:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 26 11:41:13 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 26 11:41:13 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=10.959738731s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 37'39 active pruub 120.224670410s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:13 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=10.959684372s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224670410s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:13 compute-0 ceph-mon[74928]: 5.1e scrub starts
Nov 26 11:41:13 compute-0 ceph-mon[74928]: 5.1e scrub ok
Nov 26 11:41:13 compute-0 ceph-mon[74928]: osdmap e59: 3 total, 3 up, 3 in
Nov 26 11:41:13 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 26 11:41:13 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 26 11:41:13 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=10.959253311s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 37'39 active pruub 120.224632263s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:13 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=10.959200859s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224632263s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:13 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:13 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:13 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 26 11:41:13 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 26 11:41:14 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 26 11:41:14 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 26 11:41:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 26 11:41:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 26 11:41:14 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 26 11:41:14 compute-0 ceph-mon[74928]: pgmap v126: 305 pgs: 1 active+recovering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1/215 objects misplaced (0.465%); 0 B/s, 0 objects/s recovering
Nov 26 11:41:14 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 26 11:41:14 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 26 11:41:14 compute-0 ceph-mon[74928]: osdmap e60: 3 total, 3 up, 3 in
Nov 26 11:41:14 compute-0 ceph-mon[74928]: 3.1e scrub starts
Nov 26 11:41:14 compute-0 ceph-mon[74928]: 3.1e scrub ok
Nov 26 11:41:14 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.5( v 37'39 lc 33'6 (0'0,37'39] local-lis/les=60/61 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:14 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:15 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 26 11:41:15 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 26 11:41:15 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v129: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 390 B/s, 1 objects/s recovering
Nov 26 11:41:15 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 26 11:41:15 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 26 11:41:15 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 26 11:41:15 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 26 11:41:15 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 26 11:41:15 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 26 11:41:15 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 26 11:41:15 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 26 11:41:15 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 26 11:41:15 compute-0 ceph-mon[74928]: 2.18 scrub starts
Nov 26 11:41:15 compute-0 ceph-mon[74928]: 2.18 scrub ok
Nov 26 11:41:15 compute-0 ceph-mon[74928]: osdmap e61: 3 total, 3 up, 3 in
Nov 26 11:41:15 compute-0 ceph-mon[74928]: 2.4 scrub starts
Nov 26 11:41:15 compute-0 ceph-mon[74928]: 2.4 scrub ok
Nov 26 11:41:15 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 26 11:41:15 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 26 11:41:16 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 26 11:41:16 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 26 11:41:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:41:16 compute-0 sudo[103888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:41:16 compute-0 sudo[103888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:41:16 compute-0 sudo[103888]: pam_unix(sudo:session): session closed for user root
Nov 26 11:41:16 compute-0 sudo[103913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:41:16 compute-0 sudo[103913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:41:16 compute-0 sudo[103913]: pam_unix(sudo:session): session closed for user root
Nov 26 11:41:16 compute-0 sudo[103938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:41:16 compute-0 sudo[103938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:41:16 compute-0 sudo[103938]: pam_unix(sudo:session): session closed for user root
Nov 26 11:41:16 compute-0 sudo[103963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 11:41:16 compute-0 sudo[103963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:41:16 compute-0 ceph-mon[74928]: pgmap v129: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 390 B/s, 1 objects/s recovering
Nov 26 11:41:16 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 26 11:41:16 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 26 11:41:16 compute-0 ceph-mon[74928]: osdmap e62: 3 total, 3 up, 3 in
Nov 26 11:41:16 compute-0 ceph-mon[74928]: 5.f scrub starts
Nov 26 11:41:16 compute-0 ceph-mon[74928]: 5.f scrub ok
Nov 26 11:41:17 compute-0 sudo[103963]: pam_unix(sudo:session): session closed for user root
Nov 26 11:41:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:41:17 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:41:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:41:17 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:41:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:41:17 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:41:17 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 594c4559-f985-432f-b237-4ad721f9ac9b does not exist
Nov 26 11:41:17 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev a3f7e175-3347-4d35-8920-07ec19e46efc does not exist
Nov 26 11:41:17 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev e48fbd5a-9ba5-4cd8-88da-baa945d715b7 does not exist
Nov 26 11:41:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 11:41:17 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:41:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 11:41:17 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:41:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:41:17 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:41:17 compute-0 sudo[104017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:41:17 compute-0 sudo[104017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:41:17 compute-0 sudo[104017]: pam_unix(sudo:session): session closed for user root
Nov 26 11:41:17 compute-0 sudo[104042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:41:17 compute-0 sudo[104042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:41:17 compute-0 sudo[104042]: pam_unix(sudo:session): session closed for user root
Nov 26 11:41:17 compute-0 sudo[104067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:41:17 compute-0 sudo[104067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:41:17 compute-0 sudo[104067]: pam_unix(sudo:session): session closed for user root
Nov 26 11:41:17 compute-0 sudo[104092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 11:41:17 compute-0 sudo[104092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:41:17 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v131: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 507 B/s, 2 keys/s, 3 objects/s recovering
Nov 26 11:41:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 26 11:41:17 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 26 11:41:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 26 11:41:17 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 26 11:41:17 compute-0 podman[104149]: 2025-11-26 11:41:17.44793926 +0000 UTC m=+0.026651136 container create edd2d75b7c18fc053ed6d37223d5344fc429c687616347d6cb5c17c3d5e2e62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 11:41:17 compute-0 systemd[1]: Started libpod-conmon-edd2d75b7c18fc053ed6d37223d5344fc429c687616347d6cb5c17c3d5e2e62f.scope.
Nov 26 11:41:17 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:41:17 compute-0 podman[104149]: 2025-11-26 11:41:17.502562395 +0000 UTC m=+0.081274281 container init edd2d75b7c18fc053ed6d37223d5344fc429c687616347d6cb5c17c3d5e2e62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cori, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:41:17 compute-0 podman[104149]: 2025-11-26 11:41:17.507139833 +0000 UTC m=+0.085851698 container start edd2d75b7c18fc053ed6d37223d5344fc429c687616347d6cb5c17c3d5e2e62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cori, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 11:41:17 compute-0 podman[104149]: 2025-11-26 11:41:17.508194996 +0000 UTC m=+0.086906862 container attach edd2d75b7c18fc053ed6d37223d5344fc429c687616347d6cb5c17c3d5e2e62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cori, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:41:17 compute-0 jolly_cori[104162]: 167 167
Nov 26 11:41:17 compute-0 systemd[1]: libpod-edd2d75b7c18fc053ed6d37223d5344fc429c687616347d6cb5c17c3d5e2e62f.scope: Deactivated successfully.
Nov 26 11:41:17 compute-0 podman[104149]: 2025-11-26 11:41:17.51141569 +0000 UTC m=+0.090127566 container died edd2d75b7c18fc053ed6d37223d5344fc429c687616347d6cb5c17c3d5e2e62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 26 11:41:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-79045abf041602aaafd555fe87140167ab9e0fb41affbe7fea21a904d2fc4e5d-merged.mount: Deactivated successfully.
Nov 26 11:41:17 compute-0 podman[104149]: 2025-11-26 11:41:17.532379832 +0000 UTC m=+0.111091697 container remove edd2d75b7c18fc053ed6d37223d5344fc429c687616347d6cb5c17c3d5e2e62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:41:17 compute-0 podman[104149]: 2025-11-26 11:41:17.43677458 +0000 UTC m=+0.015486467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:41:17 compute-0 systemd[1]: libpod-conmon-edd2d75b7c18fc053ed6d37223d5344fc429c687616347d6cb5c17c3d5e2e62f.scope: Deactivated successfully.
Nov 26 11:41:17 compute-0 podman[104183]: 2025-11-26 11:41:17.641234852 +0000 UTC m=+0.025682155 container create 9e8402607da1d60be06ff48c4c064dc6d15264137f7cee505f26780a30861225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lamport, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:41:17 compute-0 systemd[1]: Started libpod-conmon-9e8402607da1d60be06ff48c4c064dc6d15264137f7cee505f26780a30861225.scope.
Nov 26 11:41:17 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:41:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57568e1f5356e41c8a8f8ec8092e2bdbd387fa630430ee0d956fbf1efb486ca1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:41:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57568e1f5356e41c8a8f8ec8092e2bdbd387fa630430ee0d956fbf1efb486ca1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:41:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57568e1f5356e41c8a8f8ec8092e2bdbd387fa630430ee0d956fbf1efb486ca1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:41:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57568e1f5356e41c8a8f8ec8092e2bdbd387fa630430ee0d956fbf1efb486ca1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:41:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57568e1f5356e41c8a8f8ec8092e2bdbd387fa630430ee0d956fbf1efb486ca1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:41:17 compute-0 podman[104183]: 2025-11-26 11:41:17.702406157 +0000 UTC m=+0.086853479 container init 9e8402607da1d60be06ff48c4c064dc6d15264137f7cee505f26780a30861225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lamport, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 11:41:17 compute-0 podman[104183]: 2025-11-26 11:41:17.708228366 +0000 UTC m=+0.092675669 container start 9e8402607da1d60be06ff48c4c064dc6d15264137f7cee505f26780a30861225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 26 11:41:17 compute-0 podman[104183]: 2025-11-26 11:41:17.709534143 +0000 UTC m=+0.093981465 container attach 9e8402607da1d60be06ff48c4c064dc6d15264137f7cee505f26780a30861225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 11:41:17 compute-0 podman[104183]: 2025-11-26 11:41:17.630573673 +0000 UTC m=+0.015020996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:41:17 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:41:17 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:41:17 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:41:17 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:41:17 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:41:17 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:41:17 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 26 11:41:17 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 26 11:41:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 26 11:41:18 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 26 11:41:18 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 26 11:41:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 26 11:41:18 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 26 11:41:18 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.372240067s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 active pruub 126.707878113s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:18 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.372200966s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.707878113s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:18 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.372087479s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 active pruub 126.708137512s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:18 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.372042656s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.708137512s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:18 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.371743202s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 active pruub 126.708282471s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:18 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.371612549s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 active pruub 126.708442688s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:18 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.371588707s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.708442688s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:18 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.371382713s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.708282471s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:18 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.291723251s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 126.142326355s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:18 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:18 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:18 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:18 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.288963318s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142326355s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:18 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:18 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:18 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.287503242s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 126.142097473s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:18 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.287465096s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142097473s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:18 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:18 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.287308693s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 126.142211914s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:18 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.287252426s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142211914s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:18 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:18 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.286839485s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 126.142440796s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:18 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.286744118s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142440796s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:18 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:18 compute-0 pensive_lamport[104196]: --> passed data devices: 0 physical, 3 LVM
Nov 26 11:41:18 compute-0 pensive_lamport[104196]: --> relative data size: 1.0
Nov 26 11:41:18 compute-0 pensive_lamport[104196]: --> All data devices are unavailable
Nov 26 11:41:18 compute-0 systemd[1]: libpod-9e8402607da1d60be06ff48c4c064dc6d15264137f7cee505f26780a30861225.scope: Deactivated successfully.
Nov 26 11:41:18 compute-0 conmon[104196]: conmon 9e8402607da1d60be06f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e8402607da1d60be06ff48c4c064dc6d15264137f7cee505f26780a30861225.scope/container/memory.events
Nov 26 11:41:18 compute-0 podman[104183]: 2025-11-26 11:41:18.519152508 +0000 UTC m=+0.903599820 container died 9e8402607da1d60be06ff48c4c064dc6d15264137f7cee505f26780a30861225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lamport, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:41:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-57568e1f5356e41c8a8f8ec8092e2bdbd387fa630430ee0d956fbf1efb486ca1-merged.mount: Deactivated successfully.
Nov 26 11:41:18 compute-0 podman[104183]: 2025-11-26 11:41:18.546866367 +0000 UTC m=+0.931313670 container remove 9e8402607da1d60be06ff48c4c064dc6d15264137f7cee505f26780a30861225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lamport, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:41:18 compute-0 systemd[1]: libpod-conmon-9e8402607da1d60be06ff48c4c064dc6d15264137f7cee505f26780a30861225.scope: Deactivated successfully.
Nov 26 11:41:18 compute-0 sudo[104092]: pam_unix(sudo:session): session closed for user root
Nov 26 11:41:18 compute-0 sudo[104235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:41:18 compute-0 sudo[104235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:41:18 compute-0 sudo[104235]: pam_unix(sudo:session): session closed for user root
Nov 26 11:41:18 compute-0 sudo[104260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:41:18 compute-0 sudo[104260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:41:18 compute-0 sudo[104260]: pam_unix(sudo:session): session closed for user root
Nov 26 11:41:18 compute-0 sudo[104285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:41:18 compute-0 sudo[104285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:41:18 compute-0 sudo[104285]: pam_unix(sudo:session): session closed for user root
Nov 26 11:41:18 compute-0 sudo[104310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- lvm list --format json
Nov 26 11:41:18 compute-0 sudo[104310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:41:18 compute-0 ceph-mon[74928]: pgmap v131: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 507 B/s, 2 keys/s, 3 objects/s recovering
Nov 26 11:41:18 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 26 11:41:18 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 26 11:41:18 compute-0 ceph-mon[74928]: osdmap e63: 3 total, 3 up, 3 in
Nov 26 11:41:18 compute-0 podman[104366]: 2025-11-26 11:41:18.956963873 +0000 UTC m=+0.026287510 container create b0fd4c758092c8f85b58c4351e99e2ea5ea93eda371673d464d936d85353e767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_einstein, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 11:41:18 compute-0 systemd[1]: Started libpod-conmon-b0fd4c758092c8f85b58c4351e99e2ea5ea93eda371673d464d936d85353e767.scope.
Nov 26 11:41:18 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:41:19 compute-0 podman[104366]: 2025-11-26 11:41:19.00405052 +0000 UTC m=+0.073374157 container init b0fd4c758092c8f85b58c4351e99e2ea5ea93eda371673d464d936d85353e767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_einstein, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:41:19 compute-0 podman[104366]: 2025-11-26 11:41:19.008079381 +0000 UTC m=+0.077403018 container start b0fd4c758092c8f85b58c4351e99e2ea5ea93eda371673d464d936d85353e767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_einstein, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 11:41:19 compute-0 podman[104366]: 2025-11-26 11:41:19.0092332 +0000 UTC m=+0.078556837 container attach b0fd4c758092c8f85b58c4351e99e2ea5ea93eda371673d464d936d85353e767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 11:41:19 compute-0 boring_einstein[104379]: 167 167
Nov 26 11:41:19 compute-0 systemd[1]: libpod-b0fd4c758092c8f85b58c4351e99e2ea5ea93eda371673d464d936d85353e767.scope: Deactivated successfully.
Nov 26 11:41:19 compute-0 conmon[104379]: conmon b0fd4c758092c8f85b58 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b0fd4c758092c8f85b58c4351e99e2ea5ea93eda371673d464d936d85353e767.scope/container/memory.events
Nov 26 11:41:19 compute-0 podman[104366]: 2025-11-26 11:41:19.012300852 +0000 UTC m=+0.081624488 container died b0fd4c758092c8f85b58c4351e99e2ea5ea93eda371673d464d936d85353e767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_einstein, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:41:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-447032de5c58d2d3929a0a453bfd8d33100dfbef405163caccc9118233fa1d9b-merged.mount: Deactivated successfully.
Nov 26 11:41:19 compute-0 podman[104366]: 2025-11-26 11:41:19.030217805 +0000 UTC m=+0.099541442 container remove b0fd4c758092c8f85b58c4351e99e2ea5ea93eda371673d464d936d85353e767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_einstein, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 11:41:19 compute-0 podman[104366]: 2025-11-26 11:41:18.946470524 +0000 UTC m=+0.015794181 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:41:19 compute-0 systemd[1]: libpod-conmon-b0fd4c758092c8f85b58c4351e99e2ea5ea93eda371673d464d936d85353e767.scope: Deactivated successfully.
Nov 26 11:41:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 26 11:41:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 26 11:41:19 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 26 11:41:19 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:19 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:19 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:19 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:19 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:19 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:19 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:19 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:19 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:19 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:19 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:19 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:19 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:19 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:19 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:19 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:19 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:19 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:19 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:19 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:19 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:19 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:19 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:19 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:19 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:19 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:19 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:19 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:19 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:19 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:19 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:19 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:19 compute-0 podman[104400]: 2025-11-26 11:41:19.143971515 +0000 UTC m=+0.026204074 container create cdc4bcfb6beeb13744b2d604afa48b59302da66bff1e4796afff46e1e10d89d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:41:19 compute-0 systemd[1]: Started libpod-conmon-cdc4bcfb6beeb13744b2d604afa48b59302da66bff1e4796afff46e1e10d89d3.scope.
Nov 26 11:41:19 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:41:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a341cabc266e027d83c1b4d242ed5c494c07c821e8e560159eccef1d553ee33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:41:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a341cabc266e027d83c1b4d242ed5c494c07c821e8e560159eccef1d553ee33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:41:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a341cabc266e027d83c1b4d242ed5c494c07c821e8e560159eccef1d553ee33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:41:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a341cabc266e027d83c1b4d242ed5c494c07c821e8e560159eccef1d553ee33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:41:19 compute-0 podman[104400]: 2025-11-26 11:41:19.209464672 +0000 UTC m=+0.091697232 container init cdc4bcfb6beeb13744b2d604afa48b59302da66bff1e4796afff46e1e10d89d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 11:41:19 compute-0 podman[104400]: 2025-11-26 11:41:19.214819494 +0000 UTC m=+0.097052053 container start cdc4bcfb6beeb13744b2d604afa48b59302da66bff1e4796afff46e1e10d89d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_margulis, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 11:41:19 compute-0 podman[104400]: 2025-11-26 11:41:19.215924883 +0000 UTC m=+0.098157442 container attach cdc4bcfb6beeb13744b2d604afa48b59302da66bff1e4796afff46e1e10d89d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:41:19 compute-0 podman[104400]: 2025-11-26 11:41:19.133773509 +0000 UTC m=+0.016006088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:41:19 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v134: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 171 B/s, 2 keys/s, 2 objects/s recovering
Nov 26 11:41:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 26 11:41:19 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 26 11:41:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 26 11:41:19 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 26 11:41:19 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 26 11:41:19 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 26 11:41:19 compute-0 modest_margulis[104414]: {
Nov 26 11:41:19 compute-0 modest_margulis[104414]:     "0": [
Nov 26 11:41:19 compute-0 modest_margulis[104414]:         {
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "devices": [
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "/dev/loop3"
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             ],
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "lv_name": "ceph_lv0",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "lv_size": "21470642176",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a9ad59a0-aa2e-4d92-b571-519d2d145b6a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "lv_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "name": "ceph_lv0",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "tags": {
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.block_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.cluster_name": "ceph",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.crush_device_class": "",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.encrypted": "0",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.osd_fsid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.osd_id": "0",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.type": "block",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.vdo": "0"
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             },
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "type": "block",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "vg_name": "ceph_vg0"
Nov 26 11:41:19 compute-0 modest_margulis[104414]:         }
Nov 26 11:41:19 compute-0 modest_margulis[104414]:     ],
Nov 26 11:41:19 compute-0 modest_margulis[104414]:     "1": [
Nov 26 11:41:19 compute-0 modest_margulis[104414]:         {
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "devices": [
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "/dev/loop4"
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             ],
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "lv_name": "ceph_lv1",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "lv_size": "21470642176",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2627095b-eef8-4027-bfef-68bf7cb6801f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "lv_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "name": "ceph_lv1",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "tags": {
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.block_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.cluster_name": "ceph",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.crush_device_class": "",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.encrypted": "0",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.osd_fsid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.osd_id": "1",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.type": "block",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.vdo": "0"
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             },
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "type": "block",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "vg_name": "ceph_vg1"
Nov 26 11:41:19 compute-0 modest_margulis[104414]:         }
Nov 26 11:41:19 compute-0 modest_margulis[104414]:     ],
Nov 26 11:41:19 compute-0 modest_margulis[104414]:     "2": [
Nov 26 11:41:19 compute-0 modest_margulis[104414]:         {
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "devices": [
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "/dev/loop5"
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             ],
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "lv_name": "ceph_lv2",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "lv_size": "21470642176",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d56156fb-7361-4bef-b06b-1320109b4323,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "lv_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "name": "ceph_lv2",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "tags": {
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.block_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.cluster_name": "ceph",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.crush_device_class": "",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.encrypted": "0",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.osd_fsid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.osd_id": "2",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.type": "block",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:                 "ceph.vdo": "0"
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             },
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "type": "block",
Nov 26 11:41:19 compute-0 modest_margulis[104414]:             "vg_name": "ceph_vg2"
Nov 26 11:41:19 compute-0 modest_margulis[104414]:         }
Nov 26 11:41:19 compute-0 modest_margulis[104414]:     ]
Nov 26 11:41:19 compute-0 modest_margulis[104414]: }
Nov 26 11:41:19 compute-0 systemd[1]: libpod-cdc4bcfb6beeb13744b2d604afa48b59302da66bff1e4796afff46e1e10d89d3.scope: Deactivated successfully.
Nov 26 11:41:19 compute-0 podman[104423]: 2025-11-26 11:41:19.862205307 +0000 UTC m=+0.016825752 container died cdc4bcfb6beeb13744b2d604afa48b59302da66bff1e4796afff46e1e10d89d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:41:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a341cabc266e027d83c1b4d242ed5c494c07c821e8e560159eccef1d553ee33-merged.mount: Deactivated successfully.
Nov 26 11:41:19 compute-0 podman[104423]: 2025-11-26 11:41:19.889495753 +0000 UTC m=+0.044116188 container remove cdc4bcfb6beeb13744b2d604afa48b59302da66bff1e4796afff46e1e10d89d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_margulis, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:41:19 compute-0 systemd[1]: libpod-conmon-cdc4bcfb6beeb13744b2d604afa48b59302da66bff1e4796afff46e1e10d89d3.scope: Deactivated successfully.
Nov 26 11:41:19 compute-0 sudo[104310]: pam_unix(sudo:session): session closed for user root
Nov 26 11:41:19 compute-0 sudo[104434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:41:19 compute-0 sudo[104434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:41:19 compute-0 sudo[104434]: pam_unix(sudo:session): session closed for user root
Nov 26 11:41:20 compute-0 sudo[104459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:41:20 compute-0 sudo[104459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:41:20 compute-0 sudo[104459]: pam_unix(sudo:session): session closed for user root
Nov 26 11:41:20 compute-0 sudo[104484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:41:20 compute-0 sudo[104484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:41:20 compute-0 sudo[104484]: pam_unix(sudo:session): session closed for user root
Nov 26 11:41:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 26 11:41:20 compute-0 sudo[104509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- raw list --format json
Nov 26 11:41:20 compute-0 sudo[104509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:41:20 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 26 11:41:20 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 26 11:41:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 26 11:41:20 compute-0 ceph-mon[74928]: osdmap e64: 3 total, 3 up, 3 in
Nov 26 11:41:20 compute-0 ceph-mon[74928]: pgmap v134: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 171 B/s, 2 keys/s, 2 objects/s recovering
Nov 26 11:41:20 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 26 11:41:20 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 26 11:41:20 compute-0 ceph-mon[74928]: 5.2 scrub starts
Nov 26 11:41:20 compute-0 ceph-mon[74928]: 5.2 scrub ok
Nov 26 11:41:20 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 26 11:41:20 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65 pruub=10.533356667s) [2] r=-1 lpr=65 pi=[45,65)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 126.142112732s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:20 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65 pruub=10.533308983s) [2] r=-1 lpr=65 pi=[45,65)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142112732s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:20 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:20 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65 pruub=10.532989502s) [2] r=-1 lpr=65 pi=[45,65)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 126.142852783s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:20 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65 pruub=10.532886505s) [2] r=-1 lpr=65 pi=[45,65)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142852783s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:20 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:20 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:20 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65 pruub=9.525791168s) [2] r=-1 lpr=65 pi=[43,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 128.621459961s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:20 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65 pruub=9.525757790s) [2] r=-1 lpr=65 pi=[43,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.621459961s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:20 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=0 lpr=65 pi=[43,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:20 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:20 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:20 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:20 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:20 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:20 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:20 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:20 compute-0 podman[104564]: 2025-11-26 11:41:20.345154661 +0000 UTC m=+0.027776298 container create c8a3dc6d8dca30ca85526a837a3eb4c871b884ac02fd29ec9ce89dcaa4f9878a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:41:20 compute-0 systemd[1]: Started libpod-conmon-c8a3dc6d8dca30ca85526a837a3eb4c871b884ac02fd29ec9ce89dcaa4f9878a.scope.
Nov 26 11:41:20 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:41:20 compute-0 podman[104564]: 2025-11-26 11:41:20.392143355 +0000 UTC m=+0.074764982 container init c8a3dc6d8dca30ca85526a837a3eb4c871b884ac02fd29ec9ce89dcaa4f9878a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:41:20 compute-0 podman[104564]: 2025-11-26 11:41:20.396615575 +0000 UTC m=+0.079237202 container start c8a3dc6d8dca30ca85526a837a3eb4c871b884ac02fd29ec9ce89dcaa4f9878a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:41:20 compute-0 podman[104564]: 2025-11-26 11:41:20.397655541 +0000 UTC m=+0.080277168 container attach c8a3dc6d8dca30ca85526a837a3eb4c871b884ac02fd29ec9ce89dcaa4f9878a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_perlman, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 11:41:20 compute-0 happy_perlman[104578]: 167 167
Nov 26 11:41:20 compute-0 systemd[1]: libpod-c8a3dc6d8dca30ca85526a837a3eb4c871b884ac02fd29ec9ce89dcaa4f9878a.scope: Deactivated successfully.
Nov 26 11:41:20 compute-0 conmon[104578]: conmon c8a3dc6d8dca30ca8552 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c8a3dc6d8dca30ca85526a837a3eb4c871b884ac02fd29ec9ce89dcaa4f9878a.scope/container/memory.events
Nov 26 11:41:20 compute-0 podman[104564]: 2025-11-26 11:41:20.401044273 +0000 UTC m=+0.083665900 container died c8a3dc6d8dca30ca85526a837a3eb4c871b884ac02fd29ec9ce89dcaa4f9878a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:41:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c86c48ca2acbbb9d11bf5056aed455cf8e45abcb4f99c028d7f918c6d8a1947-merged.mount: Deactivated successfully.
Nov 26 11:41:20 compute-0 podman[104564]: 2025-11-26 11:41:20.421593313 +0000 UTC m=+0.104214940 container remove c8a3dc6d8dca30ca85526a837a3eb4c871b884ac02fd29ec9ce89dcaa4f9878a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 26 11:41:20 compute-0 podman[104564]: 2025-11-26 11:41:20.334066689 +0000 UTC m=+0.016688336 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:41:20 compute-0 systemd[1]: libpod-conmon-c8a3dc6d8dca30ca85526a837a3eb4c871b884ac02fd29ec9ce89dcaa4f9878a.scope: Deactivated successfully.
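Note: the happy_perlman container above is created, started, prints "167 167", and is torn down within a fraction of a second. That output is consistent with cephadm probing the uid and gid of the ceph user inside the image (167:167 in these ceph containers) before it touches daemon directories, although the exact entrypoint is not recorded in this log. A rough Python sketch of the same kind of probe, assuming podman is on PATH and reusing the image digest from the log (illustrative only, not the command cephadm actually ran):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Run a throwaway container that stats a ceph-owned path, mirroring the
    # short create/start/die/remove cycle seen in the podman events above.
    result = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())  # expected to print something like "167 167"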
Nov 26 11:41:20 compute-0 podman[104600]: 2025-11-26 11:41:20.531516264 +0000 UTC m=+0.027145706 container create e01fdd2cf33662d217a92377366c6f2ca2828138aef1463394241e00710aca55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 11:41:20 compute-0 systemd[1]: Started libpod-conmon-e01fdd2cf33662d217a92377366c6f2ca2828138aef1463394241e00710aca55.scope.
Nov 26 11:41:20 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:41:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ca68c0c5f5f358c91679bbc3b3521eeddcc468c3383ca9f55e906c5a7c6da5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:41:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ca68c0c5f5f358c91679bbc3b3521eeddcc468c3383ca9f55e906c5a7c6da5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:41:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ca68c0c5f5f358c91679bbc3b3521eeddcc468c3383ca9f55e906c5a7c6da5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:41:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ca68c0c5f5f358c91679bbc3b3521eeddcc468c3383ca9f55e906c5a7c6da5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:41:20 compute-0 podman[104600]: 2025-11-26 11:41:20.595362638 +0000 UTC m=+0.090992070 container init e01fdd2cf33662d217a92377366c6f2ca2828138aef1463394241e00710aca55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 11:41:20 compute-0 podman[104600]: 2025-11-26 11:41:20.602314871 +0000 UTC m=+0.097944302 container start e01fdd2cf33662d217a92377366c6f2ca2828138aef1463394241e00710aca55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mclean, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 11:41:20 compute-0 podman[104600]: 2025-11-26 11:41:20.60375725 +0000 UTC m=+0.099386702 container attach e01fdd2cf33662d217a92377366c6f2ca2828138aef1463394241e00710aca55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mclean, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:41:20 compute-0 podman[104600]: 2025-11-26 11:41:20.520141064 +0000 UTC m=+0.015770517 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:41:20 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 26 11:41:20 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 26 11:41:20 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 26 11:41:20 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 26 11:41:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 26 11:41:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 26 11:41:21 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Nov 26 11:41:21 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:21 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:21 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:21 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.004311562s) [2] async=[2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 131.613037109s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:21 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.004244804s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.613037109s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:21 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:21 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:21 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.001119614s) [2] async=[2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 131.610458374s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:21 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.003470421s) [2] async=[2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 131.613113403s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:21 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.003413200s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.613113403s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:21 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.003370285s) [2] async=[2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 131.613098145s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:21 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.003334045s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.613098145s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:21 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.000521660s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.610458374s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:21 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:21 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:21 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:21 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:21 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:21 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 26 11:41:21 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 26 11:41:21 compute-0 ceph-mon[74928]: osdmap e65: 3 total, 3 up, 3 in
Nov 26 11:41:21 compute-0 ceph-mon[74928]: 3.1f scrub starts
Nov 26 11:41:21 compute-0 ceph-mon[74928]: 3.1f scrub ok
Nov 26 11:41:21 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:21 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:21 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:21 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:21 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:21 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:21 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:21 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:21 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:21 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:21 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:21 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:21 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.001863480s) [2] async=[2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 44'389 active pruub 135.102462769s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:21 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.001706123s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.102462769s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:21 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.001188278s) [2] async=[2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 44'389 active pruub 135.102416992s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:21 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.001043320s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.102416992s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:21 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.000925064s) [2] async=[2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 44'389 active pruub 135.102462769s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:21 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.000885963s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.102462769s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:21 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=14.997325897s) [2] async=[2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 44'389 active pruub 135.098831177s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:21 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=14.996876717s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.098831177s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:21 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:21 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:21 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=65/66 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=0 lpr=65 pi=[43,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:21 compute-0 determined_mclean[104613]: {
Nov 26 11:41:21 compute-0 determined_mclean[104613]:     "2627095b-eef8-4027-bfef-68bf7cb6801f": {
Nov 26 11:41:21 compute-0 determined_mclean[104613]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:41:21 compute-0 determined_mclean[104613]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 11:41:21 compute-0 determined_mclean[104613]:         "osd_id": 1,
Nov 26 11:41:21 compute-0 determined_mclean[104613]:         "osd_uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:41:21 compute-0 determined_mclean[104613]:         "type": "bluestore"
Nov 26 11:41:21 compute-0 determined_mclean[104613]:     },
Nov 26 11:41:21 compute-0 determined_mclean[104613]:     "a9ad59a0-aa2e-4d92-b571-519d2d145b6a": {
Nov 26 11:41:21 compute-0 determined_mclean[104613]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:41:21 compute-0 determined_mclean[104613]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 11:41:21 compute-0 determined_mclean[104613]:         "osd_id": 0,
Nov 26 11:41:21 compute-0 determined_mclean[104613]:         "osd_uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:41:21 compute-0 determined_mclean[104613]:         "type": "bluestore"
Nov 26 11:41:21 compute-0 determined_mclean[104613]:     },
Nov 26 11:41:21 compute-0 determined_mclean[104613]:     "d56156fb-7361-4bef-b06b-1320109b4323": {
Nov 26 11:41:21 compute-0 determined_mclean[104613]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:41:21 compute-0 determined_mclean[104613]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 11:41:21 compute-0 determined_mclean[104613]:         "osd_id": 2,
Nov 26 11:41:21 compute-0 determined_mclean[104613]:         "osd_uuid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:41:21 compute-0 determined_mclean[104613]:         "type": "bluestore"
Nov 26 11:41:21 compute-0 determined_mclean[104613]:     }
Nov 26 11:41:21 compute-0 determined_mclean[104613]: }
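Note: the JSON block above is the stdout of the short-lived determined_mclean container and maps each OSD uuid to its BlueStore device and id; it looks like the per-host OSD inventory cephadm collects via a ceph-volume listing inside the ceph image, though the exact invocation is not visible here. A minimal Python sketch for pulling the osd_id/device pairs out of such a blob, assuming it has been saved to a hypothetical file named osd_inventory.json:

    import json

    # Load the inventory blob captured from the container's stdout.
    with open("osd_inventory.json") as f:
        inventory = json.load(f)

    # Each top-level key is an osd_uuid; the value describes one OSD.
    for osd_uuid, osd in sorted(inventory.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{osd['osd_id']}: {osd['device']} "
              f"({osd['type']}, fsid {osd['ceph_fsid']})")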
Nov 26 11:41:21 compute-0 systemd[1]: libpod-e01fdd2cf33662d217a92377366c6f2ca2828138aef1463394241e00710aca55.scope: Deactivated successfully.
Nov 26 11:41:21 compute-0 conmon[104613]: conmon e01fdd2cf33662d217a9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e01fdd2cf33662d217a92377366c6f2ca2828138aef1463394241e00710aca55.scope/container/memory.events
Nov 26 11:41:21 compute-0 podman[104600]: 2025-11-26 11:41:21.366451984 +0000 UTC m=+0.862081415 container died e01fdd2cf33662d217a92377366c6f2ca2828138aef1463394241e00710aca55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 11:41:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-04ca68c0c5f5f358c91679bbc3b3521eeddcc468c3383ca9f55e906c5a7c6da5-merged.mount: Deactivated successfully.
Nov 26 11:41:21 compute-0 podman[104600]: 2025-11-26 11:41:21.394482696 +0000 UTC m=+0.890112118 container remove e01fdd2cf33662d217a92377366c6f2ca2828138aef1463394241e00710aca55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 11:41:21 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v137: 305 pgs: 4 active+remapped, 301 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 137 B/s, 4 objects/s recovering
Nov 26 11:41:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 26 11:41:21 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 26 11:41:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 26 11:41:21 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 26 11:41:21 compute-0 systemd[1]: libpod-conmon-e01fdd2cf33662d217a92377366c6f2ca2828138aef1463394241e00710aca55.scope: Deactivated successfully.
Nov 26 11:41:21 compute-0 sudo[104509]: pam_unix(sudo:session): session closed for user root
Nov 26 11:41:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:41:21 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:41:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:41:21 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:41:21 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 723e56ae-2192-4883-9462-f178941d7ef7 does not exist
Nov 26 11:41:21 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 027d57d9-c99d-4ebd-8ba8-7052b49bcd74 does not exist
Nov 26 11:41:21 compute-0 sudo[104656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:41:21 compute-0 sudo[104656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:41:21 compute-0 sudo[104656]: pam_unix(sudo:session): session closed for user root
Nov 26 11:41:21 compute-0 sudo[104681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:41:21 compute-0 sudo[104681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:41:21 compute-0 sudo[104681]: pam_unix(sudo:session): session closed for user root
Nov 26 11:41:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:41:21 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.11 deep-scrub starts
Nov 26 11:41:21 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.11 deep-scrub ok
Nov 26 11:41:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 26 11:41:22 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 26 11:41:22 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 26 11:41:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 26 11:41:22 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 26 11:41:22 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67 pruub=10.614984512s) [0] r=-1 lpr=67 pi=[50,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 128.224777222s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:22 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67 pruub=10.614945412s) [0] r=-1 lpr=67 pi=[50,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.224777222s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:22 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=0 lpr=67 pi=[50,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:22 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:22 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:22 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:22 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:22 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:22 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:22 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:22 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:22 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:22 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:22 compute-0 ceph-mon[74928]: 3.e scrub starts
Nov 26 11:41:22 compute-0 ceph-mon[74928]: 3.e scrub ok
Nov 26 11:41:22 compute-0 ceph-mon[74928]: osdmap e66: 3 total, 3 up, 3 in
Nov 26 11:41:22 compute-0 ceph-mon[74928]: pgmap v137: 305 pgs: 4 active+remapped, 301 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 137 B/s, 4 objects/s recovering
Nov 26 11:41:22 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 26 11:41:22 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 26 11:41:22 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:41:22 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:41:22 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 26 11:41:22 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 26 11:41:22 compute-0 ceph-mon[74928]: osdmap e67: 3 total, 3 up, 3 in
Nov 26 11:41:22 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Nov 26 11:41:22 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Nov 26 11:41:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 26 11:41:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 26 11:41:23 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 26 11:41:23 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68 pruub=14.997215271s) [2] async=[2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 133.622528076s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:23 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68 pruub=14.997069359s) [2] async=[2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 133.622497559s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:23 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68 pruub=14.997024536s) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.622497559s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:23 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68 pruub=14.996996880s) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.622528076s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:23 compute-0 ceph-mon[74928]: 3.11 deep-scrub starts
Nov 26 11:41:23 compute-0 ceph-mon[74928]: 3.11 deep-scrub ok
Nov 26 11:41:23 compute-0 ceph-mon[74928]: 3.1b scrub starts
Nov 26 11:41:23 compute-0 ceph-mon[74928]: 3.1b scrub ok
Nov 26 11:41:23 compute-0 ceph-mon[74928]: osdmap e68: 3 total, 3 up, 3 in
Nov 26 11:41:23 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 68 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=67/68 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=0 lpr=67 pi=[50,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:23 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:23 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:23 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:23 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:23 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v140: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 192 B/s, 9 objects/s recovering
Nov 26 11:41:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 26 11:41:23 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 26 11:41:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 26 11:41:23 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 26 11:41:23 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 26 11:41:23 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 26 11:41:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 26 11:41:24 compute-0 ceph-mon[74928]: pgmap v140: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 192 B/s, 9 objects/s recovering
Nov 26 11:41:24 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 26 11:41:24 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 26 11:41:24 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 26 11:41:24 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 26 11:41:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 26 11:41:24 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 26 11:41:24 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:24 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:24 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=8.492801666s) [0] r=-1 lpr=69 pi=[55,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 128.236526489s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:24 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=8.492767334s) [0] r=-1 lpr=69 pi=[55,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.236526489s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:24 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 69 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=0 lpr=69 pi=[55,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:24 compute-0 sshd-session[104706]: Accepted publickey for zuul from 192.168.122.30 port 32924 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:41:24 compute-0 systemd-logind[744]: New session 34 of user zuul.
Nov 26 11:41:24 compute-0 systemd[1]: Started Session 34 of User zuul.
Nov 26 11:41:24 compute-0 sshd-session[104706]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:41:24 compute-0 python3.9[104859]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 26 11:41:24 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 26 11:41:24 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 26 11:41:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 26 11:41:25 compute-0 ceph-mon[74928]: 3.8 scrub starts
Nov 26 11:41:25 compute-0 ceph-mon[74928]: 3.8 scrub ok
Nov 26 11:41:25 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 26 11:41:25 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 26 11:41:25 compute-0 ceph-mon[74928]: osdmap e69: 3 total, 3 up, 3 in
Nov 26 11:41:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 26 11:41:25 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 26 11:41:25 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 70 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=69/70 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=0 lpr=69 pi=[55,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:25 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v143: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 4 objects/s recovering
Nov 26 11:41:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 26 11:41:25 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 26 11:41:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 26 11:41:25 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 26 11:41:25 compute-0 python3.9[105033]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:41:26 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 26 11:41:26 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 26 11:41:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 26 11:41:26 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 26 11:41:26 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 26 11:41:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 26 11:41:26 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 26 11:41:26 compute-0 ceph-mon[74928]: 3.16 scrub starts
Nov 26 11:41:26 compute-0 ceph-mon[74928]: 3.16 scrub ok
Nov 26 11:41:26 compute-0 ceph-mon[74928]: osdmap e70: 3 total, 3 up, 3 in
Nov 26 11:41:26 compute-0 ceph-mon[74928]: pgmap v143: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 4 objects/s recovering
Nov 26 11:41:26 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 26 11:41:26 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
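Note: the repeated "osd pool set ... pgp_num_actual" audit entries appear to show the mgr stepping pgp_num_actual up one value at a time (9, 10, 11, 12, ...) for cephfs.cephfs.meta and default.rgw.log while placement groups are being split, with each dispatch followed by a finished entry and a new osdmap epoch. A small Python sketch that tracks this progression from a journal excerpt, assuming the relevant lines have been saved to a hypothetical file named compute-0-journal.txt:

    import re
    from collections import defaultdict

    # Matches the mon audit entries for "osd pool set ... pgp_num_actual".
    PATTERN = re.compile(
        r'"prefix": "osd pool set", "pool": "(?P<pool>[^"]+)", '
        r'"var": "pgp_num_actual", "val": "(?P<val>\d+)"'
    )

    progression = defaultdict(list)  # pool -> pgp_num_actual values seen

    with open("compute-0-journal.txt") as f:
        for line in f:
            if "': finished" not in line:  # count each change once, on completion
                continue
            m = PATTERN.search(line)
            if m:
                progression[m.group("pool")].append(int(m.group("val")))

    for pool, values in progression.items():
        print(pool, "->", values)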
Nov 26 11:41:26 compute-0 sudo[105187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccssqvetbbjavqzpliyxrbqyrqtemlwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157286.082776-45-15452933572566/AnsiballZ_command.py'
Nov 26 11:41:26 compute-0 sudo[105187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:41:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:41:26 compute-0 python3.9[105189]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:41:26 compute-0 sudo[105187]: pam_unix(sudo:session): session closed for user root
Nov 26 11:41:26 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 26 11:41:26 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 26 11:41:27 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 26 11:41:27 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 26 11:41:27 compute-0 sudo[105340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpgifeaknyvvwlajdrbmlxfcckwrjrvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157286.7832384-57-239685657560465/AnsiballZ_stat.py'
Nov 26 11:41:27 compute-0 sudo[105340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:41:27 compute-0 ceph-mon[74928]: 3.18 scrub starts
Nov 26 11:41:27 compute-0 ceph-mon[74928]: 3.18 scrub ok
Nov 26 11:41:27 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 26 11:41:27 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 26 11:41:27 compute-0 ceph-mon[74928]: osdmap e71: 3 total, 3 up, 3 in
Nov 26 11:41:27 compute-0 ceph-mon[74928]: 3.a scrub starts
Nov 26 11:41:27 compute-0 ceph-mon[74928]: 3.a scrub ok
Nov 26 11:41:27 compute-0 python3.9[105342]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:41:27 compute-0 sudo[105340]: pam_unix(sudo:session): session closed for user root
Nov 26 11:41:27 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v145: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:41:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 26 11:41:27 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 26 11:41:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 26 11:41:27 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 26 11:41:27 compute-0 sudo[105494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klifitkqrrglhbppkfsdgckzvhndhxrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157287.454069-68-30443639183806/AnsiballZ_file.py'
Nov 26 11:41:27 compute-0 sudo[105494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:41:27 compute-0 python3.9[105496]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:41:27 compute-0 sudo[105494]: pam_unix(sudo:session): session closed for user root
Nov 26 11:41:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 26 11:41:28 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 26 11:41:28 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 26 11:41:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 26 11:41:28 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 26 11:41:28 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 71 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71 pruub=8.613817215s) [1] r=-1 lpr=71 pi=[57,71)/1 crt=37'39 mlcod 37'39 active pruub 135.751876831s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:28 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 72 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71 pruub=8.613759995s) [1] r=-1 lpr=71 pi=[57,71)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 135.751876831s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:28 compute-0 ceph-mon[74928]: 4.1a scrub starts
Nov 26 11:41:28 compute-0 ceph-mon[74928]: 4.1a scrub ok
Nov 26 11:41:28 compute-0 ceph-mon[74928]: pgmap v145: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:41:28 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 26 11:41:28 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 26 11:41:28 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:28 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72 pruub=10.479932785s) [2] r=-1 lpr=72 pi=[45,72)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 134.136596680s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:28 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72 pruub=10.479909897s) [2] r=-1 lpr=72 pi=[45,72)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.136596680s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:28 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72 pruub=10.484605789s) [2] r=-1 lpr=72 pi=[45,72)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 134.142684937s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:28 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72 pruub=10.484507561s) [2] r=-1 lpr=72 pi=[45,72)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.142684937s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:28 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:28 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:28 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 26 11:41:28 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 26 11:41:28 compute-0 sudo[105646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecnqcppykwcqatfqznadvrzwoiicwaww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157288.08875-77-21070417116243/AnsiballZ_file.py'
Nov 26 11:41:28 compute-0 sudo[105646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:41:28 compute-0 python3.9[105648]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:41:28 compute-0 sudo[105646]: pam_unix(sudo:session): session closed for user root
Nov 26 11:41:28 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 26 11:41:28 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 26 11:41:28 compute-0 python3.9[105798]: ansible-ansible.builtin.service_facts Invoked
Nov 26 11:41:29 compute-0 network[105815]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 11:41:29 compute-0 network[105816]: 'network-scripts' will be removed from distribution in near future.
Nov 26 11:41:29 compute-0 network[105817]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 11:41:29 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.1 deep-scrub starts
Nov 26 11:41:29 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.1 deep-scrub ok
Nov 26 11:41:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 26 11:41:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 26 11:41:29 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 26 11:41:29 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:29 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:29 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:29 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:29 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 26 11:41:29 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 26 11:41:29 compute-0 ceph-mon[74928]: osdmap e72: 3 total, 3 up, 3 in
Nov 26 11:41:29 compute-0 ceph-mon[74928]: 5.c scrub starts
Nov 26 11:41:29 compute-0 ceph-mon[74928]: 5.c scrub ok
Nov 26 11:41:29 compute-0 ceph-mon[74928]: 3.6 scrub starts
Nov 26 11:41:29 compute-0 ceph-mon[74928]: 3.6 scrub ok
Nov 26 11:41:29 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:29 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:29 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:29 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:29 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=71/73 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:29 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v148: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 51 B/s, 2 objects/s recovering
Nov 26 11:41:29 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 26 11:41:29 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 26 11:41:30 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 26 11:41:30 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 26 11:41:30 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 26 11:41:30 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 26 11:41:30 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 26 11:41:30 compute-0 ceph-mon[74928]: 4.1 deep-scrub starts
Nov 26 11:41:30 compute-0 ceph-mon[74928]: 4.1 deep-scrub ok
Nov 26 11:41:30 compute-0 ceph-mon[74928]: osdmap e73: 3 total, 3 up, 3 in
Nov 26 11:41:30 compute-0 ceph-mon[74928]: pgmap v148: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 51 B/s, 2 objects/s recovering
Nov 26 11:41:30 compute-0 ceph-mon[74928]: 3.f scrub starts
Nov 26 11:41:30 compute-0 ceph-mon[74928]: 3.f scrub ok
Nov 26 11:41:30 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:30 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 26 11:41:31 compute-0 ceph-mon[74928]: 4.e scrub starts
Nov 26 11:41:31 compute-0 ceph-mon[74928]: 4.e scrub ok
Nov 26 11:41:31 compute-0 ceph-mon[74928]: osdmap e74: 3 total, 3 up, 3 in
Nov 26 11:41:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 26 11:41:31 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 26 11:41:31 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75 pruub=15.000406265s) [2] async=[2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 141.680450439s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:31 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75 pruub=15.000348091s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.680450439s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:31 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75 pruub=14.999793053s) [2] async=[2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 141.680419922s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:31 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75 pruub=14.999548912s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.680419922s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:31 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:31 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:31 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:31 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:31 compute-0 python3.9[106077]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:41:31 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v151: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Nov 26 11:41:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:41:31 compute-0 python3.9[106227]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:41:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 26 11:41:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 26 11:41:32 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 26 11:41:32 compute-0 ceph-mon[74928]: osdmap e75: 3 total, 3 up, 3 in
Nov 26 11:41:32 compute-0 ceph-mon[74928]: pgmap v151: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Nov 26 11:41:32 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:32 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:32 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 26 11:41:32 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 26 11:41:32 compute-0 python3.9[106381]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:41:33 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.11 deep-scrub starts
Nov 26 11:41:33 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.11 deep-scrub ok
Nov 26 11:41:33 compute-0 ceph-mon[74928]: osdmap e76: 3 total, 3 up, 3 in
Nov 26 11:41:33 compute-0 ceph-mon[74928]: 5.1d scrub starts
Nov 26 11:41:33 compute-0 ceph-mon[74928]: 5.1d scrub ok
Nov 26 11:41:33 compute-0 sudo[106537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czigwwcfaxvyljbclhwefhtdobgfdprk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157293.1323643-125-103085862687614/AnsiballZ_setup.py'
Nov 26 11:41:33 compute-0 sudo[106537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:41:33 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v153: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 2 objects/s recovering
Nov 26 11:41:33 compute-0 python3.9[106539]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 11:41:33 compute-0 sudo[106537]: pam_unix(sudo:session): session closed for user root
Nov 26 11:41:33 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 26 11:41:33 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 26 11:41:34 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.13 deep-scrub starts
Nov 26 11:41:34 compute-0 sudo[106621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxzpwoqzsdsizispjitlaboeenlauugp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157293.1323643-125-103085862687614/AnsiballZ_dnf.py'
Nov 26 11:41:34 compute-0 sudo[106621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:41:34 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.13 deep-scrub ok
Nov 26 11:41:34 compute-0 ceph-mon[74928]: 4.11 deep-scrub starts
Nov 26 11:41:34 compute-0 ceph-mon[74928]: 4.11 deep-scrub ok
Nov 26 11:41:34 compute-0 ceph-mon[74928]: pgmap v153: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 2 objects/s recovering
Nov 26 11:41:34 compute-0 ceph-mon[74928]: 3.c scrub starts
Nov 26 11:41:34 compute-0 ceph-mon[74928]: 3.c scrub ok
Nov 26 11:41:34 compute-0 python3.9[106623]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 11:41:35 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 26 11:41:35 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 26 11:41:35 compute-0 ceph-mon[74928]: 4.13 deep-scrub starts
Nov 26 11:41:35 compute-0 ceph-mon[74928]: 4.13 deep-scrub ok
Nov 26 11:41:35 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v154: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 2 objects/s recovering
Nov 26 11:41:35 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 26 11:41:35 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 26 11:41:35 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 26 11:41:35 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 26 11:41:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 26 11:41:36 compute-0 ceph-mon[74928]: 5.1a scrub starts
Nov 26 11:41:36 compute-0 ceph-mon[74928]: 5.1a scrub ok
Nov 26 11:41:36 compute-0 ceph-mon[74928]: pgmap v154: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 2 objects/s recovering
Nov 26 11:41:36 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 26 11:41:36 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 26 11:41:36 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 26 11:41:36 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 26 11:41:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 26 11:41:36 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 26 11:41:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:41:36 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Nov 26 11:41:36 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Nov 26 11:41:36 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 26 11:41:36 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 26 11:41:37 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 26 11:41:37 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 26 11:41:37 compute-0 ceph-mon[74928]: osdmap e77: 3 total, 3 up, 3 in
Nov 26 11:41:37 compute-0 ceph-mon[74928]: 3.3 scrub starts
Nov 26 11:41:37 compute-0 ceph-mon[74928]: 3.3 scrub ok
Nov 26 11:41:37 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v156: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 17 B/s, 2 objects/s recovering
Nov 26 11:41:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 26 11:41:37 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 26 11:41:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 26 11:41:37 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 26 11:41:37 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Nov 26 11:41:37 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Nov 26 11:41:38 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 26 11:41:38 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 26 11:41:38 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 26 11:41:38 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 26 11:41:38 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 26 11:41:38 compute-0 ceph-mon[74928]: 4.18 scrub starts
Nov 26 11:41:38 compute-0 ceph-mon[74928]: 4.18 scrub ok
Nov 26 11:41:38 compute-0 ceph-mon[74928]: pgmap v156: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 17 B/s, 2 objects/s recovering
Nov 26 11:41:38 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 26 11:41:38 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 26 11:41:38 compute-0 ceph-mon[74928]: 3.15 scrub starts
Nov 26 11:41:38 compute-0 ceph-mon[74928]: 3.15 scrub ok
Nov 26 11:41:38 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77 pruub=8.538226128s) [1] r=-1 lpr=77 pi=[60,77)/1 crt=37'39 mlcod 37'39 active pruub 145.758438110s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:38 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77 pruub=8.538172722s) [1] r=-1 lpr=77 pi=[60,77)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 145.758438110s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:38 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 26 11:41:39 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 26 11:41:39 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 26 11:41:39 compute-0 ceph-mon[74928]: osdmap e78: 3 total, 3 up, 3 in
Nov 26 11:41:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 26 11:41:39 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 26 11:41:39 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 79 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=77/79 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:39 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v159: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 26 11:41:40 compute-0 ceph-mon[74928]: osdmap e79: 3 total, 3 up, 3 in
Nov 26 11:41:40 compute-0 ceph-mon[74928]: pgmap v159: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 26 11:41:41 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Nov 26 11:41:41 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Nov 26 11:41:41 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.18 deep-scrub starts
Nov 26 11:41:41 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.18 deep-scrub ok
Nov 26 11:41:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Optimize plan auto_2025-11-26_11:41:41
Nov 26 11:41:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 11:41:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Some PGs (0.003279) are inactive; try again later
Nov 26 11:41:41 compute-0 ceph-mon[74928]: 10.3 scrub starts
Nov 26 11:41:41 compute-0 ceph-mon[74928]: 10.3 scrub ok
Nov 26 11:41:41 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v160: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:41:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:41:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:41:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:41:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:41:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 11:41:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:41:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:41:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:41:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 11:41:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:41:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:41:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:41:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:41:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:41:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:41:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:41:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:41:42 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Nov 26 11:41:42 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Nov 26 11:41:42 compute-0 ceph-mon[74928]: 5.18 deep-scrub starts
Nov 26 11:41:42 compute-0 ceph-mon[74928]: 5.18 deep-scrub ok
Nov 26 11:41:42 compute-0 ceph-mon[74928]: pgmap v160: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:41:42 compute-0 ceph-mon[74928]: 10.5 scrub starts
Nov 26 11:41:42 compute-0 ceph-mon[74928]: 10.5 scrub ok
Nov 26 11:41:43 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 26 11:41:43 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 26 11:41:43 compute-0 ceph-mon[74928]: 10.a scrub starts
Nov 26 11:41:43 compute-0 ceph-mon[74928]: 10.a scrub ok
Nov 26 11:41:43 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v161: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:41:43 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Nov 26 11:41:43 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Nov 26 11:41:44 compute-0 ceph-mon[74928]: pgmap v161: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:41:44 compute-0 ceph-mon[74928]: 3.9 scrub starts
Nov 26 11:41:44 compute-0 ceph-mon[74928]: 3.9 scrub ok
Nov 26 11:41:44 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 26 11:41:44 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 26 11:41:44 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 26 11:41:44 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 26 11:41:45 compute-0 ceph-mon[74928]: 3.17 scrub starts
Nov 26 11:41:45 compute-0 ceph-mon[74928]: 3.17 scrub ok
Nov 26 11:41:45 compute-0 ceph-mon[74928]: 10.c scrub starts
Nov 26 11:41:45 compute-0 ceph-mon[74928]: 10.c scrub ok
Nov 26 11:41:45 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v162: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 8 B/s, 0 objects/s recovering
Nov 26 11:41:45 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 26 11:41:45 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 26 11:41:45 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 26 11:41:45 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 26 11:41:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 26 11:41:46 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 26 11:41:46 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 26 11:41:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 26 11:41:46 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 26 11:41:46 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 80 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80 pruub=14.336933136s) [2] r=-1 lpr=80 pi=[57,80)/1 crt=37'39 mlcod 37'39 active pruub 159.751373291s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:46 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 80 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80 pruub=14.336879730s) [2] r=-1 lpr=80 pi=[57,80)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 159.751373291s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:46 compute-0 ceph-mon[74928]: pgmap v162: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 8 B/s, 0 objects/s recovering
Nov 26 11:41:46 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 26 11:41:46 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 26 11:41:46 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 80 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:41:47 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 26 11:41:47 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 26 11:41:47 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v164: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 8 B/s, 0 objects/s recovering
Nov 26 11:41:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Nov 26 11:41:47 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 26 11:41:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 26 11:41:47 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 26 11:41:47 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 26 11:41:47 compute-0 ceph-mon[74928]: osdmap e80: 3 total, 3 up, 3 in
Nov 26 11:41:47 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 26 11:41:47 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 26 11:41:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 26 11:41:47 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 26 11:41:47 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 81 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=80/81 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:48 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 26 11:41:48 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 26 11:41:48 compute-0 ceph-mon[74928]: 5.19 scrub starts
Nov 26 11:41:48 compute-0 ceph-mon[74928]: 5.19 scrub ok
Nov 26 11:41:48 compute-0 ceph-mon[74928]: pgmap v164: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 8 B/s, 0 objects/s recovering
Nov 26 11:41:48 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 26 11:41:48 compute-0 ceph-mon[74928]: osdmap e81: 3 total, 3 up, 3 in
Nov 26 11:41:49 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v166: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 84 B/s, 0 objects/s recovering
Nov 26 11:41:49 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Nov 26 11:41:49 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 26 11:41:49 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 26 11:41:49 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 26 11:41:49 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 26 11:41:49 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 26 11:41:49 compute-0 ceph-mon[74928]: 4.5 scrub starts
Nov 26 11:41:49 compute-0 ceph-mon[74928]: 4.5 scrub ok
Nov 26 11:41:49 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 26 11:41:49 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 26 11:41:49 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 26 11:41:49 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 26 11:41:49 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 26 11:41:50 compute-0 ceph-mon[74928]: pgmap v166: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 84 B/s, 0 objects/s recovering
Nov 26 11:41:50 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 26 11:41:50 compute-0 ceph-mon[74928]: osdmap e82: 3 total, 3 up, 3 in
Nov 26 11:41:50 compute-0 ceph-mon[74928]: 3.12 scrub starts
Nov 26 11:41:50 compute-0 ceph-mon[74928]: 3.12 scrub ok
Nov 26 11:41:50 compute-0 ceph-mon[74928]: 10.18 scrub starts
Nov 26 11:41:50 compute-0 ceph-mon[74928]: 10.18 scrub ok
Nov 26 11:41:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 11:41:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:41:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 11:41:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:41:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:41:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:41:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:41:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:41:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:41:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:41:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:41:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:41:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.359070782053787e-07 of space, bias 4.0, pg target 0.0007630884938464544 quantized to 16 (current 16)
Nov 26 11:41:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:41:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:41:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:41:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 11:41:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:41:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 11:41:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:41:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:41:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:41:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 11:41:50 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 26 11:41:50 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 26 11:41:51 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v168: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 100 B/s, 0 objects/s recovering
Nov 26 11:41:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 26 11:41:51 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 26 11:41:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 26 11:41:51 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 26 11:41:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 26 11:41:51 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 26 11:41:51 compute-0 ceph-mon[74928]: 7.1b scrub starts
Nov 26 11:41:51 compute-0 ceph-mon[74928]: 7.1b scrub ok
Nov 26 11:41:51 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 26 11:41:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:41:51 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 26 11:41:51 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 26 11:41:52 compute-0 ceph-mon[74928]: pgmap v168: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 100 B/s, 0 objects/s recovering
Nov 26 11:41:52 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 26 11:41:52 compute-0 ceph-mon[74928]: osdmap e83: 3 total, 3 up, 3 in
Nov 26 11:41:52 compute-0 ceph-mon[74928]: 10.1b scrub starts
Nov 26 11:41:52 compute-0 ceph-mon[74928]: 10.1b scrub ok
Nov 26 11:41:53 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v170: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 100 B/s, 0 objects/s recovering
Nov 26 11:41:53 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 26 11:41:53 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 26 11:41:53 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 26 11:41:53 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 26 11:41:53 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 26 11:41:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 84 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84 pruub=14.262881279s) [2] r=-1 lpr=84 pi=[53,84)/1 crt=44'389 mlcod 0'0 active pruub 166.709609985s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:53 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 84 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84 pruub=14.262837410s) [2] r=-1 lpr=84 pi=[53,84)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 166.709609985s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:53 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 26 11:41:53 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 26 11:41:53 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84) [2] r=0 lpr=84 pi=[53,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:53 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.14 deep-scrub starts
Nov 26 11:41:53 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.14 deep-scrub ok
Nov 26 11:41:54 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 26 11:41:54 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 26 11:41:54 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 26 11:41:54 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 26 11:41:54 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 26 11:41:54 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 85 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] r=-1 lpr=85 pi=[53,85)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:54 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 85 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] r=-1 lpr=85 pi=[53,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:54 compute-0 ceph-mon[74928]: pgmap v170: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 100 B/s, 0 objects/s recovering
Nov 26 11:41:54 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 26 11:41:54 compute-0 ceph-mon[74928]: osdmap e84: 3 total, 3 up, 3 in
Nov 26 11:41:54 compute-0 ceph-mon[74928]: 8.14 deep-scrub starts
Nov 26 11:41:54 compute-0 ceph-mon[74928]: 8.14 deep-scrub ok
Nov 26 11:41:54 compute-0 ceph-mon[74928]: 10.1c scrub starts
Nov 26 11:41:54 compute-0 ceph-mon[74928]: 10.1c scrub ok
Nov 26 11:41:54 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 85 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:54 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 85 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:55 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v173: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:41:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Nov 26 11:41:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 26 11:41:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 26 11:41:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 26 11:41:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 26 11:41:55 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 26 11:41:55 compute-0 ceph-mon[74928]: osdmap e85: 3 total, 3 up, 3 in
Nov 26 11:41:55 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 26 11:41:55 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:55 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 26 11:41:55 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 26 11:41:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 26 11:41:56 compute-0 ceph-mon[74928]: pgmap v173: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:41:56 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 26 11:41:56 compute-0 ceph-mon[74928]: osdmap e86: 3 total, 3 up, 3 in
Nov 26 11:41:56 compute-0 ceph-mon[74928]: 4.7 scrub starts
Nov 26 11:41:56 compute-0 ceph-mon[74928]: 4.7 scrub ok
Nov 26 11:41:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 26 11:41:56 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 26 11:41:56 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87 pruub=14.991056442s) [2] async=[2] r=-1 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 44'389 active pruub 170.465728760s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:56 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87 pruub=14.990988731s) [2] r=-1 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 170.465728760s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:56 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:56 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:56 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.9 deep-scrub starts
Nov 26 11:41:56 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.9 deep-scrub ok
Nov 26 11:41:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:41:57 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Nov 26 11:41:57 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Nov 26 11:41:57 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v176: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 26 11:41:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 26 11:41:57 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 26 11:41:57 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.4 deep-scrub starts
Nov 26 11:41:57 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.4 deep-scrub ok
Nov 26 11:41:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 26 11:41:57 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 26 11:41:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 26 11:41:57 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 26 11:41:57 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:41:57 compute-0 ceph-mon[74928]: osdmap e87: 3 total, 3 up, 3 in
Nov 26 11:41:57 compute-0 ceph-mon[74928]: 4.9 deep-scrub starts
Nov 26 11:41:57 compute-0 ceph-mon[74928]: 4.9 deep-scrub ok
Nov 26 11:41:57 compute-0 ceph-mon[74928]: 10.1d scrub starts
Nov 26 11:41:57 compute-0 ceph-mon[74928]: 10.1d scrub ok
Nov 26 11:41:57 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 26 11:41:57 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 88 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88 pruub=9.820838928s) [1] r=-1 lpr=88 pi=[53,88)/1 crt=44'389 mlcod 0'0 active pruub 166.708480835s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:57 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 88 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88 pruub=9.820796967s) [1] r=-1 lpr=88 pi=[53,88)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 166.708480835s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:57 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88) [1] r=0 lpr=88 pi=[53,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 26 11:41:58 compute-0 ceph-mon[74928]: pgmap v176: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 26 11:41:58 compute-0 ceph-mon[74928]: 4.4 deep-scrub starts
Nov 26 11:41:58 compute-0 ceph-mon[74928]: 4.4 deep-scrub ok
Nov 26 11:41:58 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 26 11:41:58 compute-0 ceph-mon[74928]: osdmap e88: 3 total, 3 up, 3 in
Nov 26 11:41:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 26 11:41:58 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 26 11:41:58 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[53,89)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:58 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[53,89)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:41:58 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 89 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:41:58 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 89 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:41:59 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v179: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 26 11:41:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Nov 26 11:41:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 26 11:41:59 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 26 11:41:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 26 11:41:59 compute-0 ceph-mon[74928]: osdmap e89: 3 total, 3 up, 3 in
Nov 26 11:41:59 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 26 11:41:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 26 11:41:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 26 11:41:59 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 26 11:41:59 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 26 11:42:00 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 26 11:42:00 compute-0 ceph-mon[74928]: pgmap v179: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 26 11:42:00 compute-0 ceph-mon[74928]: 4.8 scrub starts
Nov 26 11:42:00 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 26 11:42:00 compute-0 ceph-mon[74928]: osdmap e90: 3 total, 3 up, 3 in
Nov 26 11:42:00 compute-0 ceph-mon[74928]: 4.8 scrub ok
Nov 26 11:42:00 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 90 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:42:00 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 26 11:42:01 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v181: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Nov 26 11:42:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Nov 26 11:42:01 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 26 11:42:01 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 26 11:42:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 26 11:42:01 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 26 11:42:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 26 11:42:01 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 26 11:42:01 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 26 11:42:01 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91 pruub=14.996270180s) [1] async=[1] r=-1 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 44'389 active pruub 175.496398926s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:01 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91 pruub=14.995950699s) [1] r=-1 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 175.496398926s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:42:01 compute-0 ceph-mon[74928]: 4.2 scrub starts
Nov 26 11:42:01 compute-0 ceph-mon[74928]: 4.2 scrub ok
Nov 26 11:42:01 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 26 11:42:01 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:01 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:42:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:42:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 90 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90 pruub=15.770151138s) [0] r=-1 lpr=90 pi=[66,90)/1 crt=44'389 mlcod 0'0 active pruub 169.793777466s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 91 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90 pruub=15.770002365s) [0] r=-1 lpr=90 pi=[66,90)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 169.793777466s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:42:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90) [0] r=0 lpr=91 pi=[66,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:42:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 26 11:42:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 26 11:42:02 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 26 11:42:02 compute-0 ceph-mon[74928]: pgmap v181: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Nov 26 11:42:02 compute-0 ceph-mon[74928]: 4.d scrub starts
Nov 26 11:42:02 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 26 11:42:02 compute-0 ceph-mon[74928]: osdmap e91: 3 total, 3 up, 3 in
Nov 26 11:42:02 compute-0 ceph-mon[74928]: 4.d scrub ok
Nov 26 11:42:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 92 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 92 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:42:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[66,92)/2 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:02 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[66,92)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:42:02 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=91/92 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:42:02 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Nov 26 11:42:02 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Nov 26 11:42:03 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v184: 305 pgs: 1 remapped+peering, 1 active+remapped, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Nov 26 11:42:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 26 11:42:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 26 11:42:03 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 26 11:42:03 compute-0 ceph-mon[74928]: osdmap e92: 3 total, 3 up, 3 in
Nov 26 11:42:03 compute-0 ceph-mon[74928]: 11.14 scrub starts
Nov 26 11:42:03 compute-0 ceph-mon[74928]: 11.14 scrub ok
Nov 26 11:42:03 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:42:03 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Nov 26 11:42:03 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Nov 26 11:42:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 26 11:42:04 compute-0 ceph-mon[74928]: pgmap v184: 305 pgs: 1 remapped+peering, 1 active+remapped, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Nov 26 11:42:04 compute-0 ceph-mon[74928]: osdmap e93: 3 total, 3 up, 3 in
Nov 26 11:42:04 compute-0 ceph-mon[74928]: 10.1f scrub starts
Nov 26 11:42:04 compute-0 ceph-mon[74928]: 10.1f scrub ok
Nov 26 11:42:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 26 11:42:04 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 26 11:42:04 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:04 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:42:04 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94 pruub=15.303225517s) [0] async=[0] r=-1 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 44'389 active pruub 171.510635376s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:04 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94 pruub=15.303150177s) [0] r=-1 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 171.510635376s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:42:04 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 26 11:42:04 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 26 11:42:05 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v187: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.f deep-scrub starts
Nov 26 11:42:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.f deep-scrub ok
Nov 26 11:42:05 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 26 11:42:05 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 26 11:42:05 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 26 11:42:05 compute-0 ceph-mon[74928]: osdmap e94: 3 total, 3 up, 3 in
Nov 26 11:42:05 compute-0 ceph-mon[74928]: 7.1a scrub starts
Nov 26 11:42:05 compute-0 ceph-mon[74928]: 7.1a scrub ok
Nov 26 11:42:05 compute-0 ceph-mon[74928]: 4.f deep-scrub starts
Nov 26 11:42:05 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:42:05 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Nov 26 11:42:05 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Nov 26 11:42:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:42:06 compute-0 ceph-mon[74928]: pgmap v187: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:06 compute-0 ceph-mon[74928]: 4.f deep-scrub ok
Nov 26 11:42:06 compute-0 ceph-mon[74928]: osdmap e95: 3 total, 3 up, 3 in
Nov 26 11:42:06 compute-0 ceph-mon[74928]: 11.15 scrub starts
Nov 26 11:42:06 compute-0 ceph-mon[74928]: 11.15 scrub ok
Nov 26 11:42:07 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v189: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Nov 26 11:42:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Nov 26 11:42:07 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 26 11:42:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 26 11:42:07 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 26 11:42:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 26 11:42:07 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 26 11:42:07 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 26 11:42:07 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 26 11:42:07 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 26 11:42:08 compute-0 ceph-mon[74928]: pgmap v189: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Nov 26 11:42:08 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 26 11:42:08 compute-0 ceph-mon[74928]: osdmap e96: 3 total, 3 up, 3 in
Nov 26 11:42:08 compute-0 ceph-mon[74928]: 8.15 scrub starts
Nov 26 11:42:08 compute-0 ceph-mon[74928]: 8.15 scrub ok
Nov 26 11:42:09 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v191: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 26 11:42:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Nov 26 11:42:09 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 26 11:42:09 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 26 11:42:09 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 26 11:42:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 26 11:42:09 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 26 11:42:09 compute-0 ceph-mon[74928]: 4.10 scrub starts
Nov 26 11:42:09 compute-0 ceph-mon[74928]: 4.10 scrub ok
Nov 26 11:42:09 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 26 11:42:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 26 11:42:09 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 26 11:42:10 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 26 11:42:10 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 26 11:42:10 compute-0 ceph-mon[74928]: pgmap v191: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 26 11:42:10 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 26 11:42:10 compute-0 ceph-mon[74928]: osdmap e97: 3 total, 3 up, 3 in
Nov 26 11:42:10 compute-0 ceph-mon[74928]: 4.12 scrub starts
Nov 26 11:42:10 compute-0 ceph-mon[74928]: 4.12 scrub ok
Nov 26 11:42:11 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v193: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 26 11:42:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 26 11:42:11 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 26 11:42:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:42:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:42:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:42:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:42:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:42:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:42:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:42:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 26 11:42:11 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 26 11:42:11 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 26 11:42:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 26 11:42:11 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 26 11:42:11 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 97 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97 pruub=12.099705696s) [2] r=-1 lpr=97 pi=[53,97)/1 crt=44'389 mlcod 0'0 active pruub 182.709518433s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:11 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 98 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97 pruub=12.099660873s) [2] r=-1 lpr=97 pi=[53,97)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 182.709518433s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:42:11 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97) [2] r=0 lpr=98 pi=[53,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:42:11 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 26 11:42:11 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 26 11:42:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 26 11:42:12 compute-0 ceph-mon[74928]: pgmap v193: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 26 11:42:12 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 26 11:42:12 compute-0 ceph-mon[74928]: osdmap e98: 3 total, 3 up, 3 in
Nov 26 11:42:12 compute-0 ceph-mon[74928]: 7.18 scrub starts
Nov 26 11:42:12 compute-0 ceph-mon[74928]: 7.18 scrub ok
Nov 26 11:42:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 26 11:42:12 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 26 11:42:12 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:12 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:42:12 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:12 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:42:12 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.3 deep-scrub starts
Nov 26 11:42:12 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.3 deep-scrub ok
Nov 26 11:42:13 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v196: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 26 11:42:13 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 26 11:42:13 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.14 deep-scrub starts
Nov 26 11:42:13 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.14 deep-scrub ok
Nov 26 11:42:13 compute-0 sudo[106621]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 26 11:42:13 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 26 11:42:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 26 11:42:13 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 26 11:42:13 compute-0 ceph-mon[74928]: osdmap e99: 3 total, 3 up, 3 in
Nov 26 11:42:13 compute-0 ceph-mon[74928]: 11.3 deep-scrub starts
Nov 26 11:42:13 compute-0 ceph-mon[74928]: 11.3 deep-scrub ok
Nov 26 11:42:13 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 26 11:42:13 compute-0 ceph-mon[74928]: 4.14 deep-scrub starts
Nov 26 11:42:13 compute-0 ceph-mon[74928]: 4.14 deep-scrub ok
Nov 26 11:42:13 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:42:13 compute-0 sudo[106923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvsfpddqspssgjyazbhmzxaroccpnnbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157333.6393495-137-20204764574985/AnsiballZ_command.py'
Nov 26 11:42:13 compute-0 sudo[106923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:13 compute-0 python3.9[106925]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:42:14 compute-0 sudo[106923]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 26 11:42:14 compute-0 ceph-mon[74928]: pgmap v196: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:14 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 26 11:42:14 compute-0 ceph-mon[74928]: osdmap e100: 3 total, 3 up, 3 in
Nov 26 11:42:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 26 11:42:14 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 26 11:42:14 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:14 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:42:14 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101 pruub=15.224581718s) [2] async=[2] r=-1 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 44'389 active pruub 188.791946411s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:14 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101 pruub=15.224447250s) [2] r=-1 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 188.791946411s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:42:15 compute-0 sudo[107210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzfhjjsujarxlgawfxvxcjqqpwwrgaea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157334.6011713-145-212113369910462/AnsiballZ_selinux.py'
Nov 26 11:42:15 compute-0 sudo[107210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:15 compute-0 python3.9[107212]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 26 11:42:15 compute-0 sudo[107210]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:15 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:15 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 26 11:42:15 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 26 11:42:15 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 26 11:42:15 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 26 11:42:15 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 26 11:42:15 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 26 11:42:15 compute-0 ceph-mon[74928]: osdmap e101: 3 total, 3 up, 3 in
Nov 26 11:42:15 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 26 11:42:15 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 102 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102 pruub=12.600892067s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=44'389 mlcod 0'0 active pruub 179.860031128s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:15 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 102 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102 pruub=12.600855827s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 179.860031128s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:42:15 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 102 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:42:15 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=101/102 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:42:15 compute-0 sudo[107362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrykpcmmigphjkukvgsqaeodkygnzuno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157335.5635982-156-22096789330868/AnsiballZ_command.py'
Nov 26 11:42:15 compute-0 sudo[107362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:15 compute-0 python3.9[107364]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 26 11:42:15 compute-0 sudo[107362]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:16 compute-0 sudo[107514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tznxszqjqmtgsgdwzaeosuocpqqsbwsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157336.0356627-164-107830652455536/AnsiballZ_file.py'
Nov 26 11:42:16 compute-0 sudo[107514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:16 compute-0 python3.9[107516]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:42:16 compute-0 sudo[107514]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:42:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 26 11:42:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 26 11:42:16 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 26 11:42:16 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:16 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:42:16 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 103 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:16 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 103 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:42:16 compute-0 ceph-mon[74928]: pgmap v199: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:16 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 26 11:42:16 compute-0 ceph-mon[74928]: osdmap e102: 3 total, 3 up, 3 in
Nov 26 11:42:16 compute-0 ceph-mon[74928]: osdmap e103: 3 total, 3 up, 3 in
Nov 26 11:42:16 compute-0 sudo[107666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouusfjasvyrzmftvbipbubtyohuhdqfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157336.5019934-172-2055397609007/AnsiballZ_mount.py'
Nov 26 11:42:16 compute-0 sudo[107666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:17 compute-0 python3.9[107668]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 26 11:42:17 compute-0 sudo[107666]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:17 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v202: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 26 11:42:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 26 11:42:17 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 26 11:42:17 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:42:17 compute-0 sudo[107818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvilglcoggsbvvrfgqycucbbkewunnei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157337.7474427-200-199643700854700/AnsiballZ_file.py'
Nov 26 11:42:17 compute-0 sudo[107818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:18 compute-0 python3.9[107820]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:42:18 compute-0 sudo[107818]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:18 compute-0 sudo[107970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sugznzaqilwzokhgdvbymndinphownxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157338.2114084-208-986510337439/AnsiballZ_stat.py'
Nov 26 11:42:18 compute-0 sudo[107970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:18 compute-0 ceph-mon[74928]: pgmap v202: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:18 compute-0 ceph-mon[74928]: osdmap e104: 3 total, 3 up, 3 in
Nov 26 11:42:18 compute-0 python3.9[107972]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:42:18 compute-0 sudo[107970]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 26 11:42:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 26 11:42:18 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 26 11:42:18 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105 pruub=15.217405319s) [0] async=[0] r=-1 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 44'389 active pruub 185.484176636s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:18 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105 pruub=15.217340469s) [0] r=-1 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.484176636s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:42:18 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:18 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:42:18 compute-0 sudo[108048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqmaymwwcsnglirgknlbirodpdnhzjsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157338.2114084-208-986510337439/AnsiballZ_file.py'
Nov 26 11:42:18 compute-0 sudo[108048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:18 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.1f deep-scrub starts
Nov 26 11:42:18 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.1f deep-scrub ok
Nov 26 11:42:18 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 26 11:42:18 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 26 11:42:18 compute-0 python3.9[108050]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:42:18 compute-0 sudo[108048]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:19 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v205: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 3 objects/s recovering
Nov 26 11:42:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 26 11:42:19 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 26 11:42:19 compute-0 sudo[108200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pucyrlslduawvyuqtzxwryomfwmwynwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157339.2777972-229-15107931726843/AnsiballZ_stat.py'
Nov 26 11:42:19 compute-0 sudo[108200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 26 11:42:19 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 26 11:42:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 26 11:42:19 compute-0 ceph-mon[74928]: osdmap e105: 3 total, 3 up, 3 in
Nov 26 11:42:19 compute-0 ceph-mon[74928]: 7.1f deep-scrub starts
Nov 26 11:42:19 compute-0 ceph-mon[74928]: 7.1f deep-scrub ok
Nov 26 11:42:19 compute-0 ceph-mon[74928]: 7.c scrub starts
Nov 26 11:42:19 compute-0 ceph-mon[74928]: 7.c scrub ok
Nov 26 11:42:19 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 26 11:42:19 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 26 11:42:19 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:42:19 compute-0 python3.9[108202]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:42:19 compute-0 sudo[108200]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:19 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Nov 26 11:42:19 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Nov 26 11:42:20 compute-0 sudo[108354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmohotncpthsgwslrbmxwmwaqtrwklar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157340.0097961-242-206329861703843/AnsiballZ_getent.py'
Nov 26 11:42:20 compute-0 sudo[108354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:20 compute-0 python3.9[108356]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 26 11:42:20 compute-0 sudo[108354]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:20 compute-0 ceph-mon[74928]: pgmap v205: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 3 objects/s recovering
Nov 26 11:42:20 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 26 11:42:20 compute-0 ceph-mon[74928]: osdmap e106: 3 total, 3 up, 3 in
Nov 26 11:42:20 compute-0 ceph-mon[74928]: 8.10 scrub starts
Nov 26 11:42:20 compute-0 ceph-mon[74928]: 8.10 scrub ok
Nov 26 11:42:20 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 26 11:42:20 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 26 11:42:20 compute-0 sudo[108507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wunwtusfmghizdaqzgkeenkbmphxitkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157340.665064-252-139664016341291/AnsiballZ_getent.py'
Nov 26 11:42:20 compute-0 sudo[108507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:20 compute-0 python3.9[108509]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 26 11:42:21 compute-0 sudo[108507]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:21 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 209 B/s wr, 6 op/s; 0 B/s, 4 objects/s recovering
Nov 26 11:42:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Nov 26 11:42:21 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 26 11:42:21 compute-0 sudo[108660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efzkleevutnnegolvnppejmflmluungd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157341.1316872-260-79128108462571/AnsiballZ_group.py'
Nov 26 11:42:21 compute-0 sudo[108660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:42:21 compute-0 sudo[108663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:42:21 compute-0 sudo[108663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:42:21 compute-0 sudo[108663]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Nov 26 11:42:21 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 26 11:42:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Nov 26 11:42:21 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Nov 26 11:42:21 compute-0 ceph-mon[74928]: 8.2 scrub starts
Nov 26 11:42:21 compute-0 ceph-mon[74928]: 8.2 scrub ok
Nov 26 11:42:21 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 26 11:42:21 compute-0 sudo[108688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:42:21 compute-0 sudo[108688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:42:21 compute-0 python3.9[108662]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 11:42:21 compute-0 sudo[108688]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:21 compute-0 sudo[108660]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:21 compute-0 sudo[108713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:42:21 compute-0 sudo[108713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:42:21 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 26 11:42:21 compute-0 sudo[108713]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:21 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 26 11:42:21 compute-0 sudo[108746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 11:42:21 compute-0 sudo[108746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:42:21 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.d deep-scrub starts
Nov 26 11:42:21 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.d deep-scrub ok
Nov 26 11:42:21 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 26 11:42:21 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 26 11:42:22 compute-0 sudo[108746]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:42:22 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:42:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:42:22 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:42:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:42:22 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:42:22 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 173b5cbf-0860-44ea-9125-ea0e877ac5ba does not exist
Nov 26 11:42:22 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 99f95c24-6a9b-4a61-9f71-0e9641f1fed3 does not exist
Nov 26 11:42:22 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev a3c88b38-77e4-44fb-a0fe-e5b97a9d50c0 does not exist
Nov 26 11:42:22 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 107 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107 pruub=12.069294930s) [0] r=-1 lpr=107 pi=[66,107)/1 crt=44'389 mlcod 0'0 active pruub 185.794097900s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:22 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 107 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107 pruub=12.069259644s) [0] r=-1 lpr=107 pi=[66,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.794097900s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:42:22 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107) [0] r=0 lpr=107 pi=[66,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:42:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 11:42:22 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:42:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 11:42:22 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:42:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:42:22 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:42:22 compute-0 sudo[108940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgqiszvzlowhbvlyvwjzjkaqzpiimcgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157341.8529656-269-68430777691866/AnsiballZ_file.py'
Nov 26 11:42:22 compute-0 sudo[108940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:22 compute-0 sudo[108942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:42:22 compute-0 sudo[108942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:42:22 compute-0 sudo[108942]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:22 compute-0 sudo[108968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:42:22 compute-0 sudo[108968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:42:22 compute-0 sudo[108968]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:22 compute-0 sudo[108993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:42:22 compute-0 sudo[108993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:42:22 compute-0 sudo[108993]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:22 compute-0 sudo[109018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 11:42:22 compute-0 sudo[109018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:42:22 compute-0 python3.9[108944]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 26 11:42:22 compute-0 sudo[108940]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:22 compute-0 podman[109099]: 2025-11-26 11:42:22.42035694 +0000 UTC m=+0.027326975 container create 6e38f8c4b25f8e69e170751b87b69ba4fa70841c1f94a3bbb599a1ac9f238136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_morse, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 11:42:22 compute-0 systemd[1]: Started libpod-conmon-6e38f8c4b25f8e69e170751b87b69ba4fa70841c1f94a3bbb599a1ac9f238136.scope.
Nov 26 11:42:22 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:42:22 compute-0 podman[109099]: 2025-11-26 11:42:22.476107873 +0000 UTC m=+0.083077918 container init 6e38f8c4b25f8e69e170751b87b69ba4fa70841c1f94a3bbb599a1ac9f238136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_morse, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 11:42:22 compute-0 podman[109099]: 2025-11-26 11:42:22.480241951 +0000 UTC m=+0.087211975 container start 6e38f8c4b25f8e69e170751b87b69ba4fa70841c1f94a3bbb599a1ac9f238136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_morse, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 11:42:22 compute-0 podman[109099]: 2025-11-26 11:42:22.481531254 +0000 UTC m=+0.088501279 container attach 6e38f8c4b25f8e69e170751b87b69ba4fa70841c1f94a3bbb599a1ac9f238136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_morse, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:42:22 compute-0 infallible_morse[109120]: 167 167
Nov 26 11:42:22 compute-0 systemd[1]: libpod-6e38f8c4b25f8e69e170751b87b69ba4fa70841c1f94a3bbb599a1ac9f238136.scope: Deactivated successfully.
Nov 26 11:42:22 compute-0 podman[109099]: 2025-11-26 11:42:22.48462843 +0000 UTC m=+0.091598455 container died 6e38f8c4b25f8e69e170751b87b69ba4fa70841c1f94a3bbb599a1ac9f238136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 11:42:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-fabf2310869144d87bd40c52853aaddc2329e004902f58b70e6909ca7038d023-merged.mount: Deactivated successfully.
Nov 26 11:42:22 compute-0 podman[109099]: 2025-11-26 11:42:22.504219587 +0000 UTC m=+0.111189612 container remove 6e38f8c4b25f8e69e170751b87b69ba4fa70841c1f94a3bbb599a1ac9f238136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 11:42:22 compute-0 podman[109099]: 2025-11-26 11:42:22.409300718 +0000 UTC m=+0.016270743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:42:22 compute-0 systemd[1]: libpod-conmon-6e38f8c4b25f8e69e170751b87b69ba4fa70841c1f94a3bbb599a1ac9f238136.scope: Deactivated successfully.
Nov 26 11:42:22 compute-0 ceph-mon[74928]: pgmap v207: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 209 B/s wr, 6 op/s; 0 B/s, 4 objects/s recovering
Nov 26 11:42:22 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 26 11:42:22 compute-0 ceph-mon[74928]: osdmap e107: 3 total, 3 up, 3 in
Nov 26 11:42:22 compute-0 ceph-mon[74928]: 7.7 scrub starts
Nov 26 11:42:22 compute-0 ceph-mon[74928]: 7.7 scrub ok
Nov 26 11:42:22 compute-0 ceph-mon[74928]: 11.d deep-scrub starts
Nov 26 11:42:22 compute-0 ceph-mon[74928]: 11.d deep-scrub ok
Nov 26 11:42:22 compute-0 ceph-mon[74928]: 11.f scrub starts
Nov 26 11:42:22 compute-0 ceph-mon[74928]: 11.f scrub ok
Nov 26 11:42:22 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:42:22 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:42:22 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:42:22 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:42:22 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:42:22 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:42:22 compute-0 podman[109210]: 2025-11-26 11:42:22.613557203 +0000 UTC m=+0.027241557 container create 556913c591b0d9e2162f1528121ef4c7c94fbd1ec0c34ff2fbc1f7f19d0c0c54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 11:42:22 compute-0 systemd[1]: Started libpod-conmon-556913c591b0d9e2162f1528121ef4c7c94fbd1ec0c34ff2fbc1f7f19d0c0c54.scope.
Nov 26 11:42:22 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.b scrub starts
Nov 26 11:42:22 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.b scrub ok
Nov 26 11:42:22 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba7f9f06d9e93d3d0cd6b662e7fbbfa9bfde0049c98be63eac38fa9cb89361d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba7f9f06d9e93d3d0cd6b662e7fbbfa9bfde0049c98be63eac38fa9cb89361d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba7f9f06d9e93d3d0cd6b662e7fbbfa9bfde0049c98be63eac38fa9cb89361d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba7f9f06d9e93d3d0cd6b662e7fbbfa9bfde0049c98be63eac38fa9cb89361d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba7f9f06d9e93d3d0cd6b662e7fbbfa9bfde0049c98be63eac38fa9cb89361d7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:42:22 compute-0 podman[109210]: 2025-11-26 11:42:22.662028461 +0000 UTC m=+0.075712835 container init 556913c591b0d9e2162f1528121ef4c7c94fbd1ec0c34ff2fbc1f7f19d0c0c54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 11:42:22 compute-0 podman[109210]: 2025-11-26 11:42:22.669597898 +0000 UTC m=+0.083282252 container start 556913c591b0d9e2162f1528121ef4c7c94fbd1ec0c34ff2fbc1f7f19d0c0c54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:42:22 compute-0 podman[109210]: 2025-11-26 11:42:22.670662751 +0000 UTC m=+0.084347125 container attach 556913c591b0d9e2162f1528121ef4c7c94fbd1ec0c34ff2fbc1f7f19d0c0c54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 11:42:22 compute-0 sudo[109278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxqerpebgzipvcfmmmkuxfrqtyweyter ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157342.4730594-280-265883324360561/AnsiballZ_dnf.py'
Nov 26 11:42:22 compute-0 sudo[109278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:22 compute-0 podman[109210]: 2025-11-26 11:42:22.602608812 +0000 UTC m=+0.016293186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:42:22 compute-0 python3.9[109280]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 11:42:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 26 11:42:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 26 11:42:23 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 26 11:42:23 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 108 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:23 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 108 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:42:23 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[66,108)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:23 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[66,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:42:23 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v210: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 11:42:23 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 11:42:23 compute-0 vigorous_moser[109246]: --> passed data devices: 0 physical, 3 LVM
Nov 26 11:42:23 compute-0 vigorous_moser[109246]: --> relative data size: 1.0
Nov 26 11:42:23 compute-0 vigorous_moser[109246]: --> All data devices are unavailable
Nov 26 11:42:23 compute-0 systemd[1]: libpod-556913c591b0d9e2162f1528121ef4c7c94fbd1ec0c34ff2fbc1f7f19d0c0c54.scope: Deactivated successfully.
Nov 26 11:42:23 compute-0 podman[109210]: 2025-11-26 11:42:23.477268484 +0000 UTC m=+0.890952837 container died 556913c591b0d9e2162f1528121ef4c7c94fbd1ec0c34ff2fbc1f7f19d0c0c54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:42:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba7f9f06d9e93d3d0cd6b662e7fbbfa9bfde0049c98be63eac38fa9cb89361d7-merged.mount: Deactivated successfully.
Nov 26 11:42:23 compute-0 podman[109210]: 2025-11-26 11:42:23.509032797 +0000 UTC m=+0.922717151 container remove 556913c591b0d9e2162f1528121ef4c7c94fbd1ec0c34ff2fbc1f7f19d0c0c54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_moser, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 11:42:23 compute-0 systemd[1]: libpod-conmon-556913c591b0d9e2162f1528121ef4c7c94fbd1ec0c34ff2fbc1f7f19d0c0c54.scope: Deactivated successfully.
Nov 26 11:42:23 compute-0 sudo[109018]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:23 compute-0 sudo[109315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:42:23 compute-0 sudo[109315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:42:23 compute-0 sudo[109315]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:23 compute-0 sudo[109340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:42:23 compute-0 sudo[109340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:42:23 compute-0 sudo[109340]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:23 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.d deep-scrub starts
Nov 26 11:42:23 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.d deep-scrub ok
Nov 26 11:42:23 compute-0 sudo[109365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:42:23 compute-0 sudo[109365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:42:23 compute-0 sudo[109365]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:23 compute-0 sudo[109390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- lvm list --format json
Nov 26 11:42:23 compute-0 sudo[109390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:42:23 compute-0 sudo[109278]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:23 compute-0 podman[109446]: 2025-11-26 11:42:23.911285033 +0000 UTC m=+0.029454265 container create 9cecd552e6970cd5aa6162c9fb4cbf7051818b477b00c072eda4c5046ffabe75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 26 11:42:23 compute-0 systemd[1]: Started libpod-conmon-9cecd552e6970cd5aa6162c9fb4cbf7051818b477b00c072eda4c5046ffabe75.scope.
Nov 26 11:42:23 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:42:23 compute-0 podman[109446]: 2025-11-26 11:42:23.954656108 +0000 UTC m=+0.072825361 container init 9cecd552e6970cd5aa6162c9fb4cbf7051818b477b00c072eda4c5046ffabe75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:42:23 compute-0 podman[109446]: 2025-11-26 11:42:23.958769886 +0000 UTC m=+0.076939120 container start 9cecd552e6970cd5aa6162c9fb4cbf7051818b477b00c072eda4c5046ffabe75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_spence, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:42:23 compute-0 podman[109446]: 2025-11-26 11:42:23.960208508 +0000 UTC m=+0.078377741 container attach 9cecd552e6970cd5aa6162c9fb4cbf7051818b477b00c072eda4c5046ffabe75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 11:42:23 compute-0 stupefied_spence[109477]: 167 167
Nov 26 11:42:23 compute-0 systemd[1]: libpod-9cecd552e6970cd5aa6162c9fb4cbf7051818b477b00c072eda4c5046ffabe75.scope: Deactivated successfully.
Nov 26 11:42:23 compute-0 podman[109446]: 2025-11-26 11:42:23.962451283 +0000 UTC m=+0.080620516 container died 9cecd552e6970cd5aa6162c9fb4cbf7051818b477b00c072eda4c5046ffabe75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 11:42:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cd2654b1c5124d292de3535ecd25a4522a372aaf7103a6a3802b19fe1f4ab9a-merged.mount: Deactivated successfully.
Nov 26 11:42:23 compute-0 podman[109446]: 2025-11-26 11:42:23.983429493 +0000 UTC m=+0.101598726 container remove 9cecd552e6970cd5aa6162c9fb4cbf7051818b477b00c072eda4c5046ffabe75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 26 11:42:23 compute-0 podman[109446]: 2025-11-26 11:42:23.900170786 +0000 UTC m=+0.018340039 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:42:23 compute-0 systemd[1]: libpod-conmon-9cecd552e6970cd5aa6162c9fb4cbf7051818b477b00c072eda4c5046ffabe75.scope: Deactivated successfully.
Nov 26 11:42:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 26 11:42:24 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 11:42:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 26 11:42:24 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 26 11:42:24 compute-0 ceph-mon[74928]: 7.b scrub starts
Nov 26 11:42:24 compute-0 ceph-mon[74928]: 7.b scrub ok
Nov 26 11:42:24 compute-0 ceph-mon[74928]: osdmap e108: 3 total, 3 up, 3 in
Nov 26 11:42:24 compute-0 ceph-mon[74928]: pgmap v210: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:24 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 11:42:24 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109 pruub=10.064750671s) [1] r=-1 lpr=109 pi=[66,109)/1 crt=44'389 mlcod 0'0 active pruub 185.794952393s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:24 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109 pruub=10.064422607s) [1] r=-1 lpr=109 pi=[66,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.794952393s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:42:24 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:42:24 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 109 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109) [1] r=0 lpr=109 pi=[66,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:42:24 compute-0 podman[109551]: 2025-11-26 11:42:24.09954063 +0000 UTC m=+0.029064221 container create 19b2cbdee0365747ac61055dc33f2ea621a98dde02d56654404e2e3d2cff454a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jepsen, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 11:42:24 compute-0 systemd[1]: Started libpod-conmon-19b2cbdee0365747ac61055dc33f2ea621a98dde02d56654404e2e3d2cff454a.scope.
Nov 26 11:42:24 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:42:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b821b14430e01dc29840301ff17454035e7b3d3b605f2805434f5415148819d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:42:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b821b14430e01dc29840301ff17454035e7b3d3b605f2805434f5415148819d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:42:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b821b14430e01dc29840301ff17454035e7b3d3b605f2805434f5415148819d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:42:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b821b14430e01dc29840301ff17454035e7b3d3b605f2805434f5415148819d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:42:24 compute-0 podman[109551]: 2025-11-26 11:42:24.152404858 +0000 UTC m=+0.081928468 container init 19b2cbdee0365747ac61055dc33f2ea621a98dde02d56654404e2e3d2cff454a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jepsen, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:42:24 compute-0 podman[109551]: 2025-11-26 11:42:24.158782692 +0000 UTC m=+0.088306282 container start 19b2cbdee0365747ac61055dc33f2ea621a98dde02d56654404e2e3d2cff454a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:42:24 compute-0 podman[109551]: 2025-11-26 11:42:24.159905853 +0000 UTC m=+0.089429443 container attach 19b2cbdee0365747ac61055dc33f2ea621a98dde02d56654404e2e3d2cff454a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 11:42:24 compute-0 podman[109551]: 2025-11-26 11:42:24.087505093 +0000 UTC m=+0.017028703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:42:24 compute-0 sudo[109651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbtifhblpfnlybooikxeraomdzwvumjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157344.022887-288-245984110302900/AnsiballZ_file.py'
Nov 26 11:42:24 compute-0 sudo[109651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:24 compute-0 python3.9[109653]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:42:24 compute-0 sudo[109651]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:24 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 26 11:42:24 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 26 11:42:24 compute-0 sudo[109805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eirasalfgvdyphbtcwgrfqfhkxmklxbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157344.5304494-296-8384544774757/AnsiballZ_stat.py'
Nov 26 11:42:24 compute-0 sudo[109805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]: {
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:     "0": [
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:         {
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "devices": [
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "/dev/loop3"
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             ],
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "lv_name": "ceph_lv0",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "lv_size": "21470642176",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a9ad59a0-aa2e-4d92-b571-519d2d145b6a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "lv_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "name": "ceph_lv0",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "tags": {
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.block_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.cluster_name": "ceph",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.crush_device_class": "",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.encrypted": "0",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.osd_fsid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.osd_id": "0",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.type": "block",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.vdo": "0"
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             },
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "type": "block",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "vg_name": "ceph_vg0"
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:         }
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:     ],
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:     "1": [
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:         {
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "devices": [
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "/dev/loop4"
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             ],
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "lv_name": "ceph_lv1",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "lv_size": "21470642176",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2627095b-eef8-4027-bfef-68bf7cb6801f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "lv_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "name": "ceph_lv1",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "tags": {
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.block_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.cluster_name": "ceph",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.crush_device_class": "",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.encrypted": "0",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.osd_fsid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.osd_id": "1",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.type": "block",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.vdo": "0"
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             },
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "type": "block",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "vg_name": "ceph_vg1"
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:         }
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:     ],
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:     "2": [
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:         {
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "devices": [
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "/dev/loop5"
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             ],
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "lv_name": "ceph_lv2",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "lv_size": "21470642176",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d56156fb-7361-4bef-b06b-1320109b4323,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "lv_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "name": "ceph_lv2",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "tags": {
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.block_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.cluster_name": "ceph",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.crush_device_class": "",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.encrypted": "0",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.osd_fsid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.osd_id": "2",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.type": "block",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:                 "ceph.vdo": "0"
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             },
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "type": "block",
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:             "vg_name": "ceph_vg2"
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:         }
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]:     ]
Nov 26 11:42:24 compute-0 blissful_jepsen[109594]: }
Nov 26 11:42:24 compute-0 systemd[1]: libpod-19b2cbdee0365747ac61055dc33f2ea621a98dde02d56654404e2e3d2cff454a.scope: Deactivated successfully.
Nov 26 11:42:24 compute-0 podman[109810]: 2025-11-26 11:42:24.821334251 +0000 UTC m=+0.016155271 container died 19b2cbdee0365747ac61055dc33f2ea621a98dde02d56654404e2e3d2cff454a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jepsen, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:42:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b821b14430e01dc29840301ff17454035e7b3d3b605f2805434f5415148819d-merged.mount: Deactivated successfully.
Nov 26 11:42:24 compute-0 podman[109810]: 2025-11-26 11:42:24.850817778 +0000 UTC m=+0.045638798 container remove 19b2cbdee0365747ac61055dc33f2ea621a98dde02d56654404e2e3d2cff454a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jepsen, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:42:24 compute-0 systemd[1]: libpod-conmon-19b2cbdee0365747ac61055dc33f2ea621a98dde02d56654404e2e3d2cff454a.scope: Deactivated successfully.
Nov 26 11:42:24 compute-0 python3.9[109807]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:42:24 compute-0 sudo[109390]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:24 compute-0 sudo[109805]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:24 compute-0 sudo[109824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:42:24 compute-0 sudo[109824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:42:24 compute-0 sudo[109824]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:24 compute-0 sudo[109855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:42:24 compute-0 sudo[109855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:42:24 compute-0 sudo[109855]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:25 compute-0 sudo[109898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:42:25 compute-0 sudo[109898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:42:25 compute-0 sudo[109898]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 26 11:42:25 compute-0 sudo[109948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- raw list --format json
Nov 26 11:42:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 26 11:42:25 compute-0 sudo[109948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:42:25 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 26 11:42:25 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] r=-1 lpr=110 pi=[66,110)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:25 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] r=-1 lpr=110 pi=[66,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:42:25 compute-0 ceph-mon[74928]: 7.d deep-scrub starts
Nov 26 11:42:25 compute-0 ceph-mon[74928]: 7.d deep-scrub ok
Nov 26 11:42:25 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 11:42:25 compute-0 ceph-mon[74928]: osdmap e109: 3 total, 3 up, 3 in
Nov 26 11:42:25 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:25 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:42:25 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110 pruub=14.995377541s) [0] async=[0] r=-1 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 44'389 active pruub 191.732009888s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:25 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110 pruub=14.995049477s) [0] r=-1 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 191.732009888s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:42:25 compute-0 sudo[109995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahzfneyghfldkdvyteowkfnkcprbbbsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157344.5304494-296-8384544774757/AnsiballZ_file.py'
Nov 26 11:42:25 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:25 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:42:25 compute-0 sudo[109995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:25 compute-0 python3.9[109999]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:42:25 compute-0 sudo[109995]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:25 compute-0 podman[110032]: 2025-11-26 11:42:25.277329535 +0000 UTC m=+0.026738092 container create 3403daad2b13350a34d5db245de20fc9f8cdbfad854934460093a39a6eb56825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_swirles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:42:25 compute-0 systemd[1]: Started libpod-conmon-3403daad2b13350a34d5db245de20fc9f8cdbfad854934460093a39a6eb56825.scope.
Nov 26 11:42:25 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:42:25 compute-0 podman[110032]: 2025-11-26 11:42:25.32970808 +0000 UTC m=+0.079116657 container init 3403daad2b13350a34d5db245de20fc9f8cdbfad854934460093a39a6eb56825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_swirles, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:42:25 compute-0 podman[110032]: 2025-11-26 11:42:25.334141406 +0000 UTC m=+0.083549963 container start 3403daad2b13350a34d5db245de20fc9f8cdbfad854934460093a39a6eb56825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_swirles, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 11:42:25 compute-0 podman[110032]: 2025-11-26 11:42:25.33541822 +0000 UTC m=+0.084826796 container attach 3403daad2b13350a34d5db245de20fc9f8cdbfad854934460093a39a6eb56825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:42:25 compute-0 thirsty_swirles[110068]: 167 167
Nov 26 11:42:25 compute-0 systemd[1]: libpod-3403daad2b13350a34d5db245de20fc9f8cdbfad854934460093a39a6eb56825.scope: Deactivated successfully.
Nov 26 11:42:25 compute-0 podman[110032]: 2025-11-26 11:42:25.337866182 +0000 UTC m=+0.087274738 container died 3403daad2b13350a34d5db245de20fc9f8cdbfad854934460093a39a6eb56825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:42:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cb39a84c47a6781be50beb4800d13e621f063cdc5f7b777e814cdd424ce247f-merged.mount: Deactivated successfully.
Nov 26 11:42:25 compute-0 podman[110032]: 2025-11-26 11:42:25.354053799 +0000 UTC m=+0.103462356 container remove 3403daad2b13350a34d5db245de20fc9f8cdbfad854934460093a39a6eb56825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_swirles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:42:25 compute-0 podman[110032]: 2025-11-26 11:42:25.266322565 +0000 UTC m=+0.015731141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:42:25 compute-0 systemd[1]: libpod-conmon-3403daad2b13350a34d5db245de20fc9f8cdbfad854934460093a39a6eb56825.scope: Deactivated successfully.
Nov 26 11:42:25 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:25 compute-0 podman[110142]: 2025-11-26 11:42:25.46878756 +0000 UTC m=+0.030519711 container create d5ab45a6726767771e04d3c8a80c9996b9d098a5c4032d8fba7d3c4cb91623a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feynman, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:42:25 compute-0 systemd[1]: Started libpod-conmon-d5ab45a6726767771e04d3c8a80c9996b9d098a5c4032d8fba7d3c4cb91623a1.scope.
Nov 26 11:42:25 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68ebd9262085365b02260d3e9c9309b0c3165371f74e7f1b2e0346d036b4731/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68ebd9262085365b02260d3e9c9309b0c3165371f74e7f1b2e0346d036b4731/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68ebd9262085365b02260d3e9c9309b0c3165371f74e7f1b2e0346d036b4731/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68ebd9262085365b02260d3e9c9309b0c3165371f74e7f1b2e0346d036b4731/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:42:25 compute-0 podman[110142]: 2025-11-26 11:42:25.527743336 +0000 UTC m=+0.089475508 container init d5ab45a6726767771e04d3c8a80c9996b9d098a5c4032d8fba7d3c4cb91623a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feynman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 11:42:25 compute-0 podman[110142]: 2025-11-26 11:42:25.534107425 +0000 UTC m=+0.095839586 container start d5ab45a6726767771e04d3c8a80c9996b9d098a5c4032d8fba7d3c4cb91623a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feynman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 11:42:25 compute-0 podman[110142]: 2025-11-26 11:42:25.535298501 +0000 UTC m=+0.097030663 container attach d5ab45a6726767771e04d3c8a80c9996b9d098a5c4032d8fba7d3c4cb91623a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feynman, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:42:25 compute-0 podman[110142]: 2025-11-26 11:42:25.456018776 +0000 UTC m=+0.017750947 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:42:25 compute-0 sudo[110233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgcxvqufeqpimtgwnyeilvwsvpbtnaiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157345.381811-309-184026772532664/AnsiballZ_stat.py'
Nov 26 11:42:25 compute-0 sudo[110233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:25 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Nov 26 11:42:25 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Nov 26 11:42:25 compute-0 python3.9[110235]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:42:25 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 26 11:42:25 compute-0 sudo[110233]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:25 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 26 11:42:25 compute-0 sudo[110311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhcezxixqfimkqiticqepjleekyqxpgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157345.381811-309-184026772532664/AnsiballZ_file.py'
Nov 26 11:42:25 compute-0 sudo[110311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 26 11:42:26 compute-0 ceph-mon[74928]: 7.10 scrub starts
Nov 26 11:42:26 compute-0 ceph-mon[74928]: 7.10 scrub ok
Nov 26 11:42:26 compute-0 ceph-mon[74928]: osdmap e110: 3 total, 3 up, 3 in
Nov 26 11:42:26 compute-0 ceph-mon[74928]: pgmap v213: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 26 11:42:26 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 26 11:42:26 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:42:26 compute-0 python3.9[110313]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:42:26 compute-0 sudo[110311]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:26 compute-0 elastic_feynman[110178]: {
Nov 26 11:42:26 compute-0 elastic_feynman[110178]:     "2627095b-eef8-4027-bfef-68bf7cb6801f": {
Nov 26 11:42:26 compute-0 elastic_feynman[110178]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:42:26 compute-0 elastic_feynman[110178]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 11:42:26 compute-0 elastic_feynman[110178]:         "osd_id": 1,
Nov 26 11:42:26 compute-0 elastic_feynman[110178]:         "osd_uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:42:26 compute-0 elastic_feynman[110178]:         "type": "bluestore"
Nov 26 11:42:26 compute-0 elastic_feynman[110178]:     },
Nov 26 11:42:26 compute-0 elastic_feynman[110178]:     "a9ad59a0-aa2e-4d92-b571-519d2d145b6a": {
Nov 26 11:42:26 compute-0 elastic_feynman[110178]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:42:26 compute-0 elastic_feynman[110178]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 11:42:26 compute-0 elastic_feynman[110178]:         "osd_id": 0,
Nov 26 11:42:26 compute-0 elastic_feynman[110178]:         "osd_uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:42:26 compute-0 elastic_feynman[110178]:         "type": "bluestore"
Nov 26 11:42:26 compute-0 elastic_feynman[110178]:     },
Nov 26 11:42:26 compute-0 elastic_feynman[110178]:     "d56156fb-7361-4bef-b06b-1320109b4323": {
Nov 26 11:42:26 compute-0 elastic_feynman[110178]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:42:26 compute-0 elastic_feynman[110178]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 11:42:26 compute-0 elastic_feynman[110178]:         "osd_id": 2,
Nov 26 11:42:26 compute-0 elastic_feynman[110178]:         "osd_uuid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:42:26 compute-0 elastic_feynman[110178]:         "type": "bluestore"
Nov 26 11:42:26 compute-0 elastic_feynman[110178]:     }
Nov 26 11:42:26 compute-0 elastic_feynman[110178]: }
Nov 26 11:42:26 compute-0 systemd[1]: libpod-d5ab45a6726767771e04d3c8a80c9996b9d098a5c4032d8fba7d3c4cb91623a1.scope: Deactivated successfully.
Nov 26 11:42:26 compute-0 podman[110366]: 2025-11-26 11:42:26.326575376 +0000 UTC m=+0.017087148 container died d5ab45a6726767771e04d3c8a80c9996b9d098a5c4032d8fba7d3c4cb91623a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 11:42:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a68ebd9262085365b02260d3e9c9309b0c3165371f74e7f1b2e0346d036b4731-merged.mount: Deactivated successfully.
Nov 26 11:42:26 compute-0 podman[110366]: 2025-11-26 11:42:26.353346467 +0000 UTC m=+0.043858229 container remove d5ab45a6726767771e04d3c8a80c9996b9d098a5c4032d8fba7d3c4cb91623a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 11:42:26 compute-0 systemd[1]: libpod-conmon-d5ab45a6726767771e04d3c8a80c9996b9d098a5c4032d8fba7d3c4cb91623a1.scope: Deactivated successfully.
Nov 26 11:42:26 compute-0 sudo[109948]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:42:26 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:42:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:42:26 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:42:26 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 92dfd27f-b850-4f0b-96d1-6d63588febcd does not exist
Nov 26 11:42:26 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 1dfab9da-df42-4fb9-adbb-3f38bb52f92a does not exist
Nov 26 11:42:26 compute-0 sudo[110378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:42:26 compute-0 sudo[110378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:42:26 compute-0 sudo[110378]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:26 compute-0 sudo[110403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:42:26 compute-0 sudo[110403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:42:26 compute-0 sudo[110403]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:42:26 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] async=[1] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:42:26 compute-0 sudo[110553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbwlfsimxalvjqfqfpinuifhihxuysss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157346.4729047-324-30162845588295/AnsiballZ_dnf.py'
Nov 26 11:42:26 compute-0 sudo[110553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:26 compute-0 python3.9[110555]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 11:42:27 compute-0 ceph-mon[74928]: 7.12 scrub starts
Nov 26 11:42:27 compute-0 ceph-mon[74928]: 7.12 scrub ok
Nov 26 11:42:27 compute-0 ceph-mon[74928]: 7.1 scrub starts
Nov 26 11:42:27 compute-0 ceph-mon[74928]: 7.1 scrub ok
Nov 26 11:42:27 compute-0 ceph-mon[74928]: osdmap e111: 3 total, 3 up, 3 in
Nov 26 11:42:27 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:42:27 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:42:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 26 11:42:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 26 11:42:27 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 26 11:42:27 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:27 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:42:27 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112 pruub=15.218453407s) [1] async=[1] r=-1 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 44'389 active pruub 194.299545288s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:42:27 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112 pruub=15.218302727s) [1] r=-1 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 194.299545288s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:42:27 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 26 11:42:27 compute-0 sudo[110553]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 26 11:42:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 26 11:42:28 compute-0 ceph-mon[74928]: osdmap e112: 3 total, 3 up, 3 in
Nov 26 11:42:28 compute-0 ceph-mon[74928]: pgmap v216: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 26 11:42:28 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 26 11:42:28 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:42:28 compute-0 python3.9[110706]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:42:28 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 26 11:42:28 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 26 11:42:28 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 26 11:42:28 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 26 11:42:29 compute-0 python3.9[110858]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 26 11:42:29 compute-0 ceph-mon[74928]: osdmap e113: 3 total, 3 up, 3 in
Nov 26 11:42:29 compute-0 ceph-mon[74928]: 7.3 scrub starts
Nov 26 11:42:29 compute-0 ceph-mon[74928]: 7.3 scrub ok
Nov 26 11:42:29 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 2 objects/s recovering
Nov 26 11:42:29 compute-0 python3.9[111008]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:42:29 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Nov 26 11:42:29 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Nov 26 11:42:30 compute-0 sudo[111158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwpbwkkonouvyslvbuedqncppusopdzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157349.722263-365-84864312844694/AnsiballZ_systemd.py'
Nov 26 11:42:30 compute-0 sudo[111158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:30 compute-0 ceph-mon[74928]: 7.14 scrub starts
Nov 26 11:42:30 compute-0 ceph-mon[74928]: 7.14 scrub ok
Nov 26 11:42:30 compute-0 ceph-mon[74928]: pgmap v218: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 2 objects/s recovering
Nov 26 11:42:30 compute-0 ceph-mon[74928]: 11.1 scrub starts
Nov 26 11:42:30 compute-0 ceph-mon[74928]: 11.1 scrub ok
Nov 26 11:42:30 compute-0 python3.9[111160]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:42:30 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 26 11:42:30 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 26 11:42:30 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 26 11:42:30 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 26 11:42:30 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 26 11:42:30 compute-0 sudo[111158]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:31 compute-0 python3.9[111321]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 26 11:42:31 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 26 11:42:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:42:31 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 26 11:42:31 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 26 11:42:31 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Nov 26 11:42:31 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Nov 26 11:42:32 compute-0 ceph-mon[74928]: pgmap v219: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 26 11:42:32 compute-0 ceph-mon[74928]: 11.8 scrub starts
Nov 26 11:42:32 compute-0 ceph-mon[74928]: 11.8 scrub ok
Nov 26 11:42:32 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Nov 26 11:42:32 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Nov 26 11:42:32 compute-0 sudo[111471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hynexityxegrztckangfeosvfmbfmclc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157352.7254393-422-259444020356119/AnsiballZ_systemd.py'
Nov 26 11:42:32 compute-0 sudo[111471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:33 compute-0 python3.9[111473]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:42:33 compute-0 sudo[111471]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:33 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v220: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Nov 26 11:42:33 compute-0 ceph-mon[74928]: 7.16 scrub starts
Nov 26 11:42:33 compute-0 ceph-mon[74928]: 7.16 scrub ok
Nov 26 11:42:33 compute-0 sudo[111625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xklhyvmmvqmzeznoqybgdvejkwrqlkxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157353.3036542-422-56842732278156/AnsiballZ_systemd.py'
Nov 26 11:42:33 compute-0 sudo[111625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:33 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 26 11:42:33 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 26 11:42:33 compute-0 python3.9[111627]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:42:33 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.c scrub starts
Nov 26 11:42:33 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.c scrub ok
Nov 26 11:42:33 compute-0 sudo[111625]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:33 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 26 11:42:33 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 26 11:42:34 compute-0 sshd-session[104709]: Connection closed by 192.168.122.30 port 32924
Nov 26 11:42:34 compute-0 sshd-session[104706]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:42:34 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Nov 26 11:42:34 compute-0 systemd[1]: session-34.scope: Consumed 46.083s CPU time.
Nov 26 11:42:34 compute-0 systemd-logind[744]: Session 34 logged out. Waiting for processes to exit.
Nov 26 11:42:34 compute-0 systemd-logind[744]: Removed session 34.
Nov 26 11:42:34 compute-0 ceph-mon[74928]: 7.17 scrub starts
Nov 26 11:42:34 compute-0 ceph-mon[74928]: 7.17 scrub ok
Nov 26 11:42:34 compute-0 ceph-mon[74928]: pgmap v220: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Nov 26 11:42:34 compute-0 ceph-mon[74928]: 8.c scrub starts
Nov 26 11:42:34 compute-0 ceph-mon[74928]: 8.c scrub ok
Nov 26 11:42:34 compute-0 ceph-mon[74928]: 7.2 scrub starts
Nov 26 11:42:34 compute-0 ceph-mon[74928]: 7.2 scrub ok
Nov 26 11:42:34 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 26 11:42:34 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 26 11:42:35 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 26 11:42:35 compute-0 ceph-mon[74928]: 7.19 scrub starts
Nov 26 11:42:35 compute-0 ceph-mon[74928]: 7.19 scrub ok
Nov 26 11:42:35 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.e scrub starts
Nov 26 11:42:35 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.e scrub ok
Nov 26 11:42:35 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 26 11:42:35 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 26 11:42:36 compute-0 ceph-mon[74928]: 7.1d scrub starts
Nov 26 11:42:36 compute-0 ceph-mon[74928]: 7.1d scrub ok
Nov 26 11:42:36 compute-0 ceph-mon[74928]: pgmap v221: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 26 11:42:36 compute-0 ceph-mon[74928]: 8.e scrub starts
Nov 26 11:42:36 compute-0 ceph-mon[74928]: 8.e scrub ok
Nov 26 11:42:36 compute-0 ceph-mon[74928]: 8.d scrub starts
Nov 26 11:42:36 compute-0 ceph-mon[74928]: 8.d scrub ok
Nov 26 11:42:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:42:37 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Nov 26 11:42:37 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 26 11:42:37 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 26 11:42:38 compute-0 ceph-mon[74928]: pgmap v222: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Nov 26 11:42:38 compute-0 ceph-mon[74928]: 11.9 scrub starts
Nov 26 11:42:38 compute-0 ceph-mon[74928]: 11.9 scrub ok
Nov 26 11:42:38 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 26 11:42:38 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 26 11:42:39 compute-0 sshd-session[111654]: Accepted publickey for zuul from 192.168.122.30 port 33142 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:42:39 compute-0 systemd-logind[744]: New session 35 of user zuul.
Nov 26 11:42:39 compute-0 systemd[1]: Started Session 35 of User zuul.
Nov 26 11:42:39 compute-0 sshd-session[111654]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:42:39 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 26 11:42:39 compute-0 ceph-mon[74928]: 7.5 scrub starts
Nov 26 11:42:39 compute-0 ceph-mon[74928]: 7.5 scrub ok
Nov 26 11:42:39 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 26 11:42:39 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 26 11:42:39 compute-0 python3.9[111807]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:42:39 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 26 11:42:39 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 26 11:42:40 compute-0 ceph-mon[74928]: pgmap v223: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 26 11:42:40 compute-0 ceph-mon[74928]: 7.8 scrub starts
Nov 26 11:42:40 compute-0 ceph-mon[74928]: 7.8 scrub ok
Nov 26 11:42:40 compute-0 sudo[111961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfxubihlbcspjzjxcqznaqidrtejlyek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157360.2299683-36-62282494068592/AnsiballZ_getent.py'
Nov 26 11:42:40 compute-0 sudo[111961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:40 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Nov 26 11:42:40 compute-0 python3.9[111963]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 26 11:42:40 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Nov 26 11:42:40 compute-0 sudo[111961]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:41 compute-0 sudo[112114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piwdrsrklmmrybrlaaobomaedsoagrcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157361.0254817-48-234433383219891/AnsiballZ_setup.py'
Nov 26 11:42:41 compute-0 sudo[112114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Optimize plan auto_2025-11-26_11:42:41
Nov 26 11:42:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 11:42:41 compute-0 ceph-mgr[75197]: [balancer INFO root] do_upmap
Nov 26 11:42:41 compute-0 ceph-mgr[75197]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'images', 'volumes', 'backups', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms']
Nov 26 11:42:41 compute-0 ceph-mgr[75197]: [balancer INFO root] prepared 0/10 changes
Nov 26 11:42:41 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v224: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:42:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:42:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:42:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:42:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:42:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:42:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 11:42:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:42:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 11:42:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:42:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:42:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:42:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:42:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:42:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:42:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:42:41 compute-0 python3.9[112116]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 11:42:41 compute-0 ceph-mon[74928]: 7.1e scrub starts
Nov 26 11:42:41 compute-0 ceph-mon[74928]: 7.1e scrub ok
Nov 26 11:42:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:42:41 compute-0 sudo[112114]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:41 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Nov 26 11:42:41 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Nov 26 11:42:41 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.4 deep-scrub starts
Nov 26 11:42:41 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.4 deep-scrub ok
Nov 26 11:42:41 compute-0 sudo[112198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afyfwpdtrbhrtvfvlwaxtjlxydchmjtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157361.0254817-48-234433383219891/AnsiballZ_dnf.py'
Nov 26 11:42:41 compute-0 sudo[112198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:42 compute-0 python3.9[112200]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 26 11:42:42 compute-0 ceph-mon[74928]: 8.1 scrub starts
Nov 26 11:42:42 compute-0 ceph-mon[74928]: 8.1 scrub ok
Nov 26 11:42:42 compute-0 ceph-mon[74928]: pgmap v224: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:42 compute-0 ceph-mon[74928]: 8.4 deep-scrub starts
Nov 26 11:42:42 compute-0 ceph-mon[74928]: 8.4 deep-scrub ok
Nov 26 11:42:43 compute-0 sudo[112198]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:43 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v225: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:43 compute-0 sudo[112351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgzmtlccaslmjmzsaqocaerqzllzulwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157363.2886183-62-252676970825911/AnsiballZ_dnf.py'
Nov 26 11:42:43 compute-0 sudo[112351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:43 compute-0 ceph-mon[74928]: 8.3 scrub starts
Nov 26 11:42:43 compute-0 ceph-mon[74928]: 8.3 scrub ok
Nov 26 11:42:43 compute-0 python3.9[112353]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 11:42:44 compute-0 ceph-mon[74928]: pgmap v225: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:44 compute-0 sudo[112351]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:45 compute-0 sudo[112504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzfuewhncuysoeqrchmrfmlkajphpuaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157364.7685928-70-6904337290931/AnsiballZ_systemd.py'
Nov 26 11:42:45 compute-0 sudo[112504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:45 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:45 compute-0 python3.9[112506]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 11:42:45 compute-0 sudo[112504]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:45 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.f scrub starts
Nov 26 11:42:45 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.f scrub ok
Nov 26 11:42:46 compute-0 python3.9[112659]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:42:46 compute-0 ceph-mon[74928]: pgmap v226: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:46 compute-0 ceph-mon[74928]: 8.f scrub starts
Nov 26 11:42:46 compute-0 ceph-mon[74928]: 8.f scrub ok
Nov 26 11:42:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:42:46 compute-0 sudo[112809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwwwntlvlropebfsbvrppzojhpdmdgob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157366.415267-88-153132037201470/AnsiballZ_sefcontext.py'
Nov 26 11:42:46 compute-0 sudo[112809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:46 compute-0 python3.9[112811]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 26 11:42:47 compute-0 sudo[112809]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:47 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:47 compute-0 python3.9[112961]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:42:48 compute-0 sudo[113117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wktauamaurxqbluubyrqierblwczrxmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157367.9808583-106-243818183377814/AnsiballZ_dnf.py'
Nov 26 11:42:48 compute-0 sudo[113117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:48 compute-0 python3.9[113119]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 11:42:48 compute-0 ceph-mon[74928]: pgmap v227: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:48 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Nov 26 11:42:48 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Nov 26 11:42:49 compute-0 sudo[113117]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:49 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:49 compute-0 ceph-mon[74928]: 8.5 scrub starts
Nov 26 11:42:49 compute-0 ceph-mon[74928]: 8.5 scrub ok
Nov 26 11:42:49 compute-0 sudo[113270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmdgwolofiewdlfsronuiezumnstixtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157369.4624083-114-122322465561986/AnsiballZ_command.py'
Nov 26 11:42:49 compute-0 sudo[113270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:49 compute-0 python3.9[113272]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:42:50 compute-0 sudo[113270]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:50 compute-0 ceph-mon[74928]: pgmap v228: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 11:42:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:42:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 11:42:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:42:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:42:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:42:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:42:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:42:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:42:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:42:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:42:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:42:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 11:42:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:42:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:42:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:42:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 11:42:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:42:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 11:42:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:42:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:42:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:42:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 11:42:50 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 26 11:42:50 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 26 11:42:50 compute-0 sudo[113557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifreeotaklrumsdyafpladppyrwwnfsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157370.6305928-122-95778220344142/AnsiballZ_file.py'
Nov 26 11:42:50 compute-0 sudo[113557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:51 compute-0 python3.9[113559]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 26 11:42:51 compute-0 sudo[113557]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:51 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:42:51 compute-0 ceph-mon[74928]: 7.a scrub starts
Nov 26 11:42:51 compute-0 ceph-mon[74928]: 7.a scrub ok
Nov 26 11:42:51 compute-0 python3.9[113709]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:42:52 compute-0 sudo[113861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qledovubeuhepxlthlzxtxwxxdkwajrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157371.8503437-138-98090515230297/AnsiballZ_dnf.py'
Nov 26 11:42:52 compute-0 sudo[113861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:52 compute-0 python3.9[113863]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 11:42:52 compute-0 ceph-mon[74928]: pgmap v229: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:52 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 26 11:42:52 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 26 11:42:53 compute-0 sudo[113861]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:53 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:53 compute-0 sudo[114014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geptgkyvzceehnxjnrnugdbtlitvtbyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157373.3728745-147-12248488919702/AnsiballZ_dnf.py'
Nov 26 11:42:53 compute-0 sudo[114014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:53 compute-0 ceph-mon[74928]: 7.4 scrub starts
Nov 26 11:42:53 compute-0 ceph-mon[74928]: 7.4 scrub ok
Nov 26 11:42:53 compute-0 python3.9[114016]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 11:42:54 compute-0 ceph-mon[74928]: pgmap v230: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:54 compute-0 sudo[114014]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:55 compute-0 sudo[114167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riscwsxfzqorezdexjnasyhxuuvpkecc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157374.971616-159-167996472725859/AnsiballZ_stat.py'
Nov 26 11:42:55 compute-0 sudo[114167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:55 compute-0 python3.9[114169]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:42:55 compute-0 sudo[114167]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:55 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:55 compute-0 sudo[114321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crvbxepipsuvxiwtlmwmmfqfpromgyuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157375.4113336-167-180957353022027/AnsiballZ_slurp.py'
Nov 26 11:42:55 compute-0 sudo[114321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:42:55 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Nov 26 11:42:55 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Nov 26 11:42:55 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.b scrub starts
Nov 26 11:42:55 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.b scrub ok
Nov 26 11:42:55 compute-0 python3.9[114323]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 26 11:42:55 compute-0 sudo[114321]: pam_unix(sudo:session): session closed for user root
Nov 26 11:42:56 compute-0 sshd-session[111657]: Connection closed by 192.168.122.30 port 33142
Nov 26 11:42:56 compute-0 sshd-session[111654]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:42:56 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Nov 26 11:42:56 compute-0 systemd[1]: session-35.scope: Consumed 13.062s CPU time.
Nov 26 11:42:56 compute-0 systemd-logind[744]: Session 35 logged out. Waiting for processes to exit.
Nov 26 11:42:56 compute-0 systemd-logind[744]: Removed session 35.
Nov 26 11:42:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:42:56 compute-0 ceph-mon[74928]: pgmap v231: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:56 compute-0 ceph-mon[74928]: 8.7 scrub starts
Nov 26 11:42:56 compute-0 ceph-mon[74928]: 8.7 scrub ok
Nov 26 11:42:56 compute-0 ceph-mon[74928]: 8.b scrub starts
Nov 26 11:42:56 compute-0 ceph-mon[74928]: 8.b scrub ok
Nov 26 11:42:56 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 26 11:42:56 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 26 11:42:56 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Nov 26 11:42:56 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Nov 26 11:42:57 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:57 compute-0 ceph-mon[74928]: 7.15 scrub starts
Nov 26 11:42:57 compute-0 ceph-mon[74928]: 7.15 scrub ok
Nov 26 11:42:57 compute-0 ceph-mon[74928]: 8.9 scrub starts
Nov 26 11:42:57 compute-0 ceph-mon[74928]: 8.9 scrub ok
Nov 26 11:42:57 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 26 11:42:57 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 26 11:42:57 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.e scrub starts
Nov 26 11:42:57 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.e scrub ok
Nov 26 11:42:58 compute-0 ceph-mon[74928]: pgmap v232: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:58 compute-0 ceph-mon[74928]: 11.1b scrub starts
Nov 26 11:42:58 compute-0 ceph-mon[74928]: 11.1b scrub ok
Nov 26 11:42:58 compute-0 ceph-mon[74928]: 11.e scrub starts
Nov 26 11:42:58 compute-0 ceph-mon[74928]: 11.e scrub ok
Nov 26 11:42:58 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Nov 26 11:42:58 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Nov 26 11:42:58 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 26 11:42:58 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 26 11:42:59 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:42:59 compute-0 ceph-mon[74928]: 8.8 scrub starts
Nov 26 11:42:59 compute-0 ceph-mon[74928]: 8.8 scrub ok
Nov 26 11:42:59 compute-0 ceph-mon[74928]: 7.f scrub starts
Nov 26 11:42:59 compute-0 ceph-mon[74928]: 7.f scrub ok
Nov 26 11:42:59 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Nov 26 11:42:59 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Nov 26 11:43:00 compute-0 ceph-mon[74928]: pgmap v233: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:00 compute-0 ceph-mon[74928]: 8.6 scrub starts
Nov 26 11:43:00 compute-0 ceph-mon[74928]: 8.6 scrub ok
Nov 26 11:43:00 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.2 deep-scrub starts
Nov 26 11:43:00 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.2 deep-scrub ok
Nov 26 11:43:01 compute-0 sshd-session[114348]: Accepted publickey for zuul from 192.168.122.30 port 58774 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:43:01 compute-0 systemd-logind[744]: New session 36 of user zuul.
Nov 26 11:43:01 compute-0 systemd[1]: Started Session 36 of User zuul.
Nov 26 11:43:01 compute-0 sshd-session[114348]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:43:01 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:43:01 compute-0 ceph-mon[74928]: 9.2 deep-scrub starts
Nov 26 11:43:01 compute-0 ceph-mon[74928]: 9.2 deep-scrub ok
Nov 26 11:43:02 compute-0 python3.9[114501]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:43:02 compute-0 ceph-mon[74928]: pgmap v234: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 26 11:43:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 26 11:43:02 compute-0 python3.9[114655]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 11:43:03 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:03 compute-0 ceph-mon[74928]: 7.11 scrub starts
Nov 26 11:43:03 compute-0 ceph-mon[74928]: 7.11 scrub ok
Nov 26 11:43:03 compute-0 python3.9[114848]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:43:03 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.a scrub starts
Nov 26 11:43:03 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.a scrub ok
Nov 26 11:43:04 compute-0 sshd-session[114351]: Connection closed by 192.168.122.30 port 58774
Nov 26 11:43:04 compute-0 sshd-session[114348]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:43:04 compute-0 systemd-logind[744]: Session 36 logged out. Waiting for processes to exit.
Nov 26 11:43:04 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Nov 26 11:43:04 compute-0 systemd[1]: session-36.scope: Consumed 1.626s CPU time.
Nov 26 11:43:04 compute-0 systemd-logind[744]: Removed session 36.
Nov 26 11:43:04 compute-0 ceph-mon[74928]: pgmap v235: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:04 compute-0 ceph-mon[74928]: 8.a scrub starts
Nov 26 11:43:04 compute-0 ceph-mon[74928]: 8.a scrub ok
Nov 26 11:43:05 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.18 deep-scrub starts
Nov 26 11:43:05 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.18 deep-scrub ok
Nov 26 11:43:05 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:05 compute-0 ceph-mon[74928]: 8.18 deep-scrub starts
Nov 26 11:43:05 compute-0 ceph-mon[74928]: 8.18 deep-scrub ok
Nov 26 11:43:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.13 deep-scrub starts
Nov 26 11:43:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.13 deep-scrub ok
Nov 26 11:43:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:43:06 compute-0 ceph-mon[74928]: pgmap v236: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:06 compute-0 ceph-mon[74928]: 8.13 deep-scrub starts
Nov 26 11:43:06 compute-0 ceph-mon[74928]: 8.13 deep-scrub ok
Nov 26 11:43:06 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 26 11:43:06 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 26 11:43:07 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 26 11:43:07 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 26 11:43:07 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:07 compute-0 ceph-mon[74928]: 11.1e scrub starts
Nov 26 11:43:07 compute-0 ceph-mon[74928]: 11.1e scrub ok
Nov 26 11:43:07 compute-0 ceph-mon[74928]: 11.4 scrub starts
Nov 26 11:43:07 compute-0 ceph-mon[74928]: 11.4 scrub ok
Nov 26 11:43:07 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Nov 26 11:43:08 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Nov 26 11:43:08 compute-0 ceph-mon[74928]: pgmap v237: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:08 compute-0 ceph-mon[74928]: 8.1d scrub starts
Nov 26 11:43:08 compute-0 ceph-mon[74928]: 8.1d scrub ok
Nov 26 11:43:08 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 26 11:43:08 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 26 11:43:09 compute-0 sshd-session[114874]: Accepted publickey for zuul from 192.168.122.30 port 57302 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:43:09 compute-0 systemd-logind[744]: New session 37 of user zuul.
Nov 26 11:43:09 compute-0 systemd[1]: Started Session 37 of User zuul.
Nov 26 11:43:09 compute-0 sshd-session[114874]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:43:09 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:09 compute-0 ceph-mon[74928]: 11.11 scrub starts
Nov 26 11:43:09 compute-0 ceph-mon[74928]: 11.11 scrub ok
Nov 26 11:43:09 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 26 11:43:09 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 26 11:43:09 compute-0 python3.9[115027]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:43:10 compute-0 python3.9[115181]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:43:10 compute-0 ceph-mon[74928]: pgmap v238: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:10 compute-0 ceph-mon[74928]: 11.1c scrub starts
Nov 26 11:43:10 compute-0 ceph-mon[74928]: 11.1c scrub ok
Nov 26 11:43:10 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 26 11:43:10 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 26 11:43:11 compute-0 sudo[115335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkiyurbfdnoqgiyyvvtsntgfgyyoodnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157390.8426793-40-90722358998082/AnsiballZ_setup.py'
Nov 26 11:43:11 compute-0 sudo[115335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:11 compute-0 python3.9[115337]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 11:43:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:43:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:43:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:43:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:43:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:43:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:43:11 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:11 compute-0 sudo[115335]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:43:11 compute-0 ceph-mon[74928]: 7.13 scrub starts
Nov 26 11:43:11 compute-0 ceph-mon[74928]: 7.13 scrub ok
Nov 26 11:43:11 compute-0 sudo[115419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adbffhxbpasnmwpspoojxhrbumxjcewr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157390.8426793-40-90722358998082/AnsiballZ_dnf.py'
Nov 26 11:43:11 compute-0 sudo[115419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:11 compute-0 python3.9[115421]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 11:43:12 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Nov 26 11:43:12 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Nov 26 11:43:12 compute-0 ceph-mon[74928]: pgmap v239: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:12 compute-0 ceph-mon[74928]: 7.9 scrub starts
Nov 26 11:43:12 compute-0 ceph-mon[74928]: 7.9 scrub ok
Nov 26 11:43:12 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Nov 26 11:43:12 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Nov 26 11:43:12 compute-0 sudo[115419]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:12 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Nov 26 11:43:12 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Nov 26 11:43:13 compute-0 sudo[115572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykitvkvxdujmgztpkgzhxjqcfzudrpow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157393.0454452-52-63712526539389/AnsiballZ_setup.py'
Nov 26 11:43:13 compute-0 sudo[115572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:13 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:13 compute-0 python3.9[115574]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 11:43:13 compute-0 ceph-mon[74928]: 8.12 scrub starts
Nov 26 11:43:13 compute-0 ceph-mon[74928]: 8.12 scrub ok
Nov 26 11:43:13 compute-0 ceph-mon[74928]: 11.6 scrub starts
Nov 26 11:43:13 compute-0 ceph-mon[74928]: 11.6 scrub ok
Nov 26 11:43:13 compute-0 sudo[115572]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:14 compute-0 sudo[115767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltivvlkpvhsrmsmapdsshbbxyqujdatj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157393.8682687-63-156471637705071/AnsiballZ_file.py'
Nov 26 11:43:14 compute-0 sudo[115767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:14 compute-0 python3.9[115769]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:43:14 compute-0 sudo[115767]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:14 compute-0 ceph-mon[74928]: pgmap v240: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:14 compute-0 sudo[115919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaqpcgybtwbnerzqhxweqhzacukhhsre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157394.4663265-71-93489818749655/AnsiballZ_command.py'
Nov 26 11:43:14 compute-0 sudo[115919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:14 compute-0 python3.9[115921]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:43:14 compute-0 sudo[115919]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:15 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:15 compute-0 sudo[116080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhkjotiksgmowmxcxfecltptuqbzbbel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157395.2022474-79-94664962866581/AnsiballZ_stat.py'
Nov 26 11:43:15 compute-0 sudo[116080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:15 compute-0 python3.9[116082]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:43:15 compute-0 sudo[116080]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:15 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 26 11:43:15 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 26 11:43:15 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 26 11:43:15 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 26 11:43:15 compute-0 sudo[116158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcxwrauobujkytbmxssnfmppncgodiwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157395.2022474-79-94664962866581/AnsiballZ_file.py'
Nov 26 11:43:15 compute-0 sudo[116158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:15 compute-0 python3.9[116160]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:43:15 compute-0 sudo[116158]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:15 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Nov 26 11:43:16 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Nov 26 11:43:16 compute-0 sudo[116310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omcqjasmpeenzcybrbhahvtngwedrvbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157396.112572-91-258331779081328/AnsiballZ_stat.py'
Nov 26 11:43:16 compute-0 sudo[116310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:16 compute-0 python3.9[116312]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:43:16 compute-0 sudo[116310]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:43:16 compute-0 sudo[116388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmvaebphsgjctofkadpulvrsrkhktjob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157396.112572-91-258331779081328/AnsiballZ_file.py'
Nov 26 11:43:16 compute-0 sudo[116388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:16 compute-0 ceph-mon[74928]: pgmap v241: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:16 compute-0 ceph-mon[74928]: 8.16 scrub starts
Nov 26 11:43:16 compute-0 ceph-mon[74928]: 8.16 scrub ok
Nov 26 11:43:16 compute-0 ceph-mon[74928]: 7.1c scrub starts
Nov 26 11:43:16 compute-0 ceph-mon[74928]: 7.1c scrub ok
Nov 26 11:43:16 compute-0 ceph-mon[74928]: 8.1f scrub starts
Nov 26 11:43:16 compute-0 ceph-mon[74928]: 8.1f scrub ok
Nov 26 11:43:16 compute-0 python3.9[116390]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:43:16 compute-0 sudo[116388]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:17 compute-0 sudo[116540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkviuownmivjceqqsynyqjlcmswefonp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157396.9407074-104-7943492071048/AnsiballZ_ini_file.py'
Nov 26 11:43:17 compute-0 sudo[116540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:17 compute-0 python3.9[116542]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:43:17 compute-0 sudo[116540]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:17 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:17 compute-0 sudo[116692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifohizbfrrwbzlhpniwsfigcmdwxnizv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157397.4930174-104-63821801936899/AnsiballZ_ini_file.py'
Nov 26 11:43:17 compute-0 sudo[116692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:17 compute-0 python3.9[116694]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:43:17 compute-0 sudo[116692]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:18 compute-0 sudo[116844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzusvavdvaztuikhxgwagvzfnocdlhjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157397.9213398-104-70875083720755/AnsiballZ_ini_file.py'
Nov 26 11:43:18 compute-0 sudo[116844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:18 compute-0 python3.9[116846]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:43:18 compute-0 sudo[116844]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:18 compute-0 sudo[116996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjzopivxiylubdszuadezirrsijfhvjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157398.3510003-104-192914154907940/AnsiballZ_ini_file.py'
Nov 26 11:43:18 compute-0 sudo[116996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:18 compute-0 ceph-mon[74928]: pgmap v242: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:18 compute-0 python3.9[116998]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:43:18 compute-0 sudo[116996]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:19 compute-0 sudo[117148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcoqdzkiedkbbtkzhgcevzgardkimikj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157398.8672166-135-7003345009544/AnsiballZ_dnf.py'
Nov 26 11:43:19 compute-0 sudo[117148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:19 compute-0 python3.9[117150]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 11:43:19 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:19 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.4 deep-scrub starts
Nov 26 11:43:19 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 26 11:43:19 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.4 deep-scrub ok
Nov 26 11:43:19 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 26 11:43:20 compute-0 sudo[117148]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:20 compute-0 ceph-mon[74928]: pgmap v243: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:20 compute-0 ceph-mon[74928]: 9.4 deep-scrub starts
Nov 26 11:43:20 compute-0 ceph-mon[74928]: 11.12 scrub starts
Nov 26 11:43:20 compute-0 ceph-mon[74928]: 9.4 deep-scrub ok
Nov 26 11:43:20 compute-0 ceph-mon[74928]: 11.12 scrub ok
Nov 26 11:43:20 compute-0 sudo[117301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmvynbiamsyhsvptvlblwytzkotivpfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157400.5271914-146-42931524217517/AnsiballZ_setup.py'
Nov 26 11:43:20 compute-0 sudo[117301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:20 compute-0 python3.9[117303]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:43:20 compute-0 sudo[117301]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:21 compute-0 sudo[117455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwwgpwmqdysxroxuvzijhejeukniutgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157401.0912607-154-161747473371393/AnsiballZ_stat.py'
Nov 26 11:43:21 compute-0 sudo[117455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:21 compute-0 python3.9[117457]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:43:21 compute-0 sudo[117455]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:21 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:43:21 compute-0 sudo[117607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsxloopuavskmgvnryaydgyyonjdbpjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157401.60397-163-265231612463973/AnsiballZ_stat.py'
Nov 26 11:43:21 compute-0 sudo[117607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:21 compute-0 python3.9[117609]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:43:21 compute-0 sudo[117607]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:22 compute-0 sudo[117759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grvtmnzfpxpameedmflgqpomxchjxdsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157402.1327395-173-204466970158904/AnsiballZ_command.py'
Nov 26 11:43:22 compute-0 sudo[117759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:22 compute-0 python3.9[117761]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:43:22 compute-0 sudo[117759]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:22 compute-0 ceph-mon[74928]: pgmap v244: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:22 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.17 deep-scrub starts
Nov 26 11:43:22 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.17 deep-scrub ok
Nov 26 11:43:22 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.e deep-scrub starts
Nov 26 11:43:22 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.e deep-scrub ok
Nov 26 11:43:22 compute-0 sudo[117912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkiunilkqtqejvngvffbwhqccsmvenur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157402.6577666-183-171897953626581/AnsiballZ_service_facts.py'
Nov 26 11:43:22 compute-0 sudo[117912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:23 compute-0 python3.9[117914]: ansible-service_facts Invoked
Nov 26 11:43:23 compute-0 network[117931]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 11:43:23 compute-0 network[117932]: 'network-scripts' will be removed from distribution in near future.
Nov 26 11:43:23 compute-0 network[117933]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 11:43:23 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:23 compute-0 ceph-mon[74928]: 8.17 deep-scrub starts
Nov 26 11:43:23 compute-0 ceph-mon[74928]: 8.17 deep-scrub ok
Nov 26 11:43:23 compute-0 ceph-mon[74928]: 7.e deep-scrub starts
Nov 26 11:43:23 compute-0 ceph-mon[74928]: 7.e deep-scrub ok
Nov 26 11:43:23 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Nov 26 11:43:23 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Nov 26 11:43:24 compute-0 ceph-mon[74928]: pgmap v245: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:24 compute-0 ceph-mon[74928]: 11.19 scrub starts
Nov 26 11:43:24 compute-0 ceph-mon[74928]: 11.19 scrub ok
Nov 26 11:43:24 compute-0 sudo[117912]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:25 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:25 compute-0 sudo[118216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alghrattvvbcawvougrgrsepcithhhcj ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764157405.3689978-198-92168756444097/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764157405.3689978-198-92168756444097/args'
Nov 26 11:43:25 compute-0 sudo[118216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:25 compute-0 sudo[118216]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:25 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.19 deep-scrub starts
Nov 26 11:43:25 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.19 deep-scrub ok
Nov 26 11:43:25 compute-0 sudo[118383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uruhxwomangkbrwmhqywqevjoalzllif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157405.8116848-209-16581369801752/AnsiballZ_dnf.py'
Nov 26 11:43:25 compute-0 sudo[118383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:26 compute-0 python3.9[118385]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 11:43:26 compute-0 sudo[118387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:43:26 compute-0 sudo[118387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:43:26 compute-0 sudo[118387]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:43:26 compute-0 sudo[118412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:43:26 compute-0 sudo[118412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:43:26 compute-0 sudo[118412]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:26 compute-0 sudo[118437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:43:26 compute-0 sudo[118437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:43:26 compute-0 sudo[118437]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:26 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.11 deep-scrub starts
Nov 26 11:43:26 compute-0 sudo[118462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 11:43:26 compute-0 sudo[118462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:43:26 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.11 deep-scrub ok
Nov 26 11:43:26 compute-0 ceph-mon[74928]: pgmap v246: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:26 compute-0 ceph-mon[74928]: 8.19 deep-scrub starts
Nov 26 11:43:26 compute-0 ceph-mon[74928]: 8.19 deep-scrub ok
Nov 26 11:43:26 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Nov 26 11:43:26 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Nov 26 11:43:26 compute-0 sudo[118462]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:43:26 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:43:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:43:26 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:43:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:43:26 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:43:26 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 7afc0418-bf20-46cb-ae1f-f62bcea28f0f does not exist
Nov 26 11:43:26 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 799730f0-e337-4d85-9f93-84280866ea3d does not exist
Nov 26 11:43:26 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 2b3e3cec-bbf9-4f23-aab2-a0f873726711 does not exist
Nov 26 11:43:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 11:43:26 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:43:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 11:43:27 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:43:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:43:27 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:43:27 compute-0 sudo[118516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:43:27 compute-0 sudo[118516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:43:27 compute-0 sudo[118516]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:27 compute-0 sudo[118541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:43:27 compute-0 sudo[118541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:43:27 compute-0 sudo[118541]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:27 compute-0 sudo[118566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:43:27 compute-0 sudo[118566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:43:27 compute-0 sudo[118566]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:27 compute-0 sudo[118591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 11:43:27 compute-0 sudo[118383]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:27 compute-0 sudo[118591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:43:27 compute-0 podman[118670]: 2025-11-26 11:43:27.388480152 +0000 UTC m=+0.029214662 container create 7d45138f813816eeb6c9752a99dec2084629d6ca5a86251667a80b870f8f072b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_austin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:43:27 compute-0 systemd[1]: Started libpod-conmon-7d45138f813816eeb6c9752a99dec2084629d6ca5a86251667a80b870f8f072b.scope.
Nov 26 11:43:27 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:43:27 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:27 compute-0 podman[118670]: 2025-11-26 11:43:27.439328453 +0000 UTC m=+0.080062971 container init 7d45138f813816eeb6c9752a99dec2084629d6ca5a86251667a80b870f8f072b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_austin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 11:43:27 compute-0 podman[118670]: 2025-11-26 11:43:27.444160757 +0000 UTC m=+0.084895256 container start 7d45138f813816eeb6c9752a99dec2084629d6ca5a86251667a80b870f8f072b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 11:43:27 compute-0 podman[118670]: 2025-11-26 11:43:27.445956696 +0000 UTC m=+0.086691205 container attach 7d45138f813816eeb6c9752a99dec2084629d6ca5a86251667a80b870f8f072b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:43:27 compute-0 lucid_austin[118713]: 167 167
Nov 26 11:43:27 compute-0 systemd[1]: libpod-7d45138f813816eeb6c9752a99dec2084629d6ca5a86251667a80b870f8f072b.scope: Deactivated successfully.
Nov 26 11:43:27 compute-0 podman[118670]: 2025-11-26 11:43:27.448417347 +0000 UTC m=+0.089151847 container died 7d45138f813816eeb6c9752a99dec2084629d6ca5a86251667a80b870f8f072b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:43:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-98394f9e618e383e5f0ba279748f707e5501ba1859713dde0ab7a146e9c09ea0-merged.mount: Deactivated successfully.
Nov 26 11:43:27 compute-0 podman[118670]: 2025-11-26 11:43:27.467871354 +0000 UTC m=+0.108605854 container remove 7d45138f813816eeb6c9752a99dec2084629d6ca5a86251667a80b870f8f072b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_austin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 11:43:27 compute-0 podman[118670]: 2025-11-26 11:43:27.375045449 +0000 UTC m=+0.015779968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:43:27 compute-0 systemd[1]: libpod-conmon-7d45138f813816eeb6c9752a99dec2084629d6ca5a86251667a80b870f8f072b.scope: Deactivated successfully.
Nov 26 11:43:27 compute-0 podman[118757]: 2025-11-26 11:43:27.57997714 +0000 UTC m=+0.028577709 container create 792b79599e477aa17324e67a60b36e12be14c36665c466dc7249e4f91841ae04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_northcutt, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:43:27 compute-0 systemd[1]: Started libpod-conmon-792b79599e477aa17324e67a60b36e12be14c36665c466dc7249e4f91841ae04.scope.
Nov 26 11:43:27 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52954fea2663e07d64410dfa54125d8b423b9b4e63a4ffd15cd27f010f184b56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52954fea2663e07d64410dfa54125d8b423b9b4e63a4ffd15cd27f010f184b56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52954fea2663e07d64410dfa54125d8b423b9b4e63a4ffd15cd27f010f184b56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52954fea2663e07d64410dfa54125d8b423b9b4e63a4ffd15cd27f010f184b56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52954fea2663e07d64410dfa54125d8b423b9b4e63a4ffd15cd27f010f184b56/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:43:27 compute-0 podman[118757]: 2025-11-26 11:43:27.640514376 +0000 UTC m=+0.089114955 container init 792b79599e477aa17324e67a60b36e12be14c36665c466dc7249e4f91841ae04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 11:43:27 compute-0 podman[118757]: 2025-11-26 11:43:27.648089114 +0000 UTC m=+0.096689683 container start 792b79599e477aa17324e67a60b36e12be14c36665c466dc7249e4f91841ae04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_northcutt, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:43:27 compute-0 podman[118757]: 2025-11-26 11:43:27.649534872 +0000 UTC m=+0.098135461 container attach 792b79599e477aa17324e67a60b36e12be14c36665c466dc7249e4f91841ae04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_northcutt, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:43:27 compute-0 podman[118757]: 2025-11-26 11:43:27.567116329 +0000 UTC m=+0.015716908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:43:27 compute-0 ceph-mon[74928]: 8.11 deep-scrub starts
Nov 26 11:43:27 compute-0 ceph-mon[74928]: 8.11 deep-scrub ok
Nov 26 11:43:27 compute-0 ceph-mon[74928]: 8.1e scrub starts
Nov 26 11:43:27 compute-0 ceph-mon[74928]: 8.1e scrub ok
Nov 26 11:43:27 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:43:27 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:43:27 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:43:27 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:43:27 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:43:27 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:43:27 compute-0 sudo[118848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnbfbixmnvuxefmyvpnkaejkdtrfanui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157407.398165-222-214496806341569/AnsiballZ_package_facts.py'
Nov 26 11:43:27 compute-0 sudo[118848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:28 compute-0 python3.9[118850]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 26 11:43:28 compute-0 sudo[118848]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:28 compute-0 modest_northcutt[118770]: --> passed data devices: 0 physical, 3 LVM
Nov 26 11:43:28 compute-0 modest_northcutt[118770]: --> relative data size: 1.0
Nov 26 11:43:28 compute-0 modest_northcutt[118770]: --> All data devices are unavailable
Nov 26 11:43:28 compute-0 systemd[1]: libpod-792b79599e477aa17324e67a60b36e12be14c36665c466dc7249e4f91841ae04.scope: Deactivated successfully.
Nov 26 11:43:28 compute-0 podman[118757]: 2025-11-26 11:43:28.478935756 +0000 UTC m=+0.927536326 container died 792b79599e477aa17324e67a60b36e12be14c36665c466dc7249e4f91841ae04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_northcutt, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 11:43:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-52954fea2663e07d64410dfa54125d8b423b9b4e63a4ffd15cd27f010f184b56-merged.mount: Deactivated successfully.
Nov 26 11:43:28 compute-0 podman[118757]: 2025-11-26 11:43:28.509405661 +0000 UTC m=+0.958006230 container remove 792b79599e477aa17324e67a60b36e12be14c36665c466dc7249e4f91841ae04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_northcutt, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 11:43:28 compute-0 systemd[1]: libpod-conmon-792b79599e477aa17324e67a60b36e12be14c36665c466dc7249e4f91841ae04.scope: Deactivated successfully.
Nov 26 11:43:28 compute-0 sudo[118591]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:28 compute-0 sudo[118909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:43:28 compute-0 sudo[118909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:43:28 compute-0 sudo[118909]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:28 compute-0 sudo[118956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:43:28 compute-0 sudo[118956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:43:28 compute-0 sudo[118956]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:28 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 26 11:43:28 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 26 11:43:28 compute-0 sudo[119007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:43:28 compute-0 sudo[119007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:43:28 compute-0 sudo[119007]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:28 compute-0 ceph-mon[74928]: pgmap v247: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:28 compute-0 sudo[119036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- lvm list --format json
Nov 26 11:43:28 compute-0 sudo[119036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:43:28 compute-0 sudo[119134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arkrwrfhtygedmfpmiatricihxcnhwyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157408.5869493-232-112632681411386/AnsiballZ_stat.py'
Nov 26 11:43:28 compute-0 sudo[119134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:28 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Nov 26 11:43:28 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Nov 26 11:43:28 compute-0 podman[119168]: 2025-11-26 11:43:28.936534874 +0000 UTC m=+0.026058253 container create 7c8bb90c0bf471667a82c2fa1625e62fbf07a5f18f2ce96dddb8f64b95152917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mayer, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:43:28 compute-0 python3.9[119138]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:43:28 compute-0 systemd[1]: Started libpod-conmon-7c8bb90c0bf471667a82c2fa1625e62fbf07a5f18f2ce96dddb8f64b95152917.scope.
Nov 26 11:43:28 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:43:28 compute-0 sudo[119134]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:28 compute-0 podman[119168]: 2025-11-26 11:43:28.993135397 +0000 UTC m=+0.082658775 container init 7c8bb90c0bf471667a82c2fa1625e62fbf07a5f18f2ce96dddb8f64b95152917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mayer, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:43:28 compute-0 podman[119168]: 2025-11-26 11:43:28.997852595 +0000 UTC m=+0.087375974 container start 7c8bb90c0bf471667a82c2fa1625e62fbf07a5f18f2ce96dddb8f64b95152917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mayer, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 11:43:28 compute-0 podman[119168]: 2025-11-26 11:43:28.9988307 +0000 UTC m=+0.088354079 container attach 7c8bb90c0bf471667a82c2fa1625e62fbf07a5f18f2ce96dddb8f64b95152917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:43:29 compute-0 suspicious_mayer[119181]: 167 167
Nov 26 11:43:29 compute-0 systemd[1]: libpod-7c8bb90c0bf471667a82c2fa1625e62fbf07a5f18f2ce96dddb8f64b95152917.scope: Deactivated successfully.
Nov 26 11:43:29 compute-0 podman[119168]: 2025-11-26 11:43:29.001378876 +0000 UTC m=+0.090902265 container died 7c8bb90c0bf471667a82c2fa1625e62fbf07a5f18f2ce96dddb8f64b95152917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mayer, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 11:43:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc4840855dd74e563c7650ccee1ffaed5e96acfc90c7d022825eb1a987c376f2-merged.mount: Deactivated successfully.
Nov 26 11:43:29 compute-0 podman[119168]: 2025-11-26 11:43:29.019484086 +0000 UTC m=+0.109007475 container remove 7c8bb90c0bf471667a82c2fa1625e62fbf07a5f18f2ce96dddb8f64b95152917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 11:43:29 compute-0 podman[119168]: 2025-11-26 11:43:28.926149044 +0000 UTC m=+0.015672443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:43:29 compute-0 systemd[1]: libpod-conmon-7c8bb90c0bf471667a82c2fa1625e62fbf07a5f18f2ce96dddb8f64b95152917.scope: Deactivated successfully.
Nov 26 11:43:29 compute-0 podman[119252]: 2025-11-26 11:43:29.136875988 +0000 UTC m=+0.028289182 container create 874144b74fd0a5da1c8a711338d21eb591d777de6613fb0be08b70f6cf50a994 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kare, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:43:29 compute-0 sudo[119288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvkctqrtdalfjlzazlawcarqedscdbmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157408.5869493-232-112632681411386/AnsiballZ_file.py'
Nov 26 11:43:29 compute-0 sudo[119288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:29 compute-0 systemd[1]: Started libpod-conmon-874144b74fd0a5da1c8a711338d21eb591d777de6613fb0be08b70f6cf50a994.scope.
Nov 26 11:43:29 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/004d2654c15acec5215f27bc9d22cb016ada9791ae65e498a29297f58e70bf98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/004d2654c15acec5215f27bc9d22cb016ada9791ae65e498a29297f58e70bf98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/004d2654c15acec5215f27bc9d22cb016ada9791ae65e498a29297f58e70bf98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/004d2654c15acec5215f27bc9d22cb016ada9791ae65e498a29297f58e70bf98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:43:29 compute-0 podman[119252]: 2025-11-26 11:43:29.196839614 +0000 UTC m=+0.088252828 container init 874144b74fd0a5da1c8a711338d21eb591d777de6613fb0be08b70f6cf50a994 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kare, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:43:29 compute-0 podman[119252]: 2025-11-26 11:43:29.205167263 +0000 UTC m=+0.096580456 container start 874144b74fd0a5da1c8a711338d21eb591d777de6613fb0be08b70f6cf50a994 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 11:43:29 compute-0 podman[119252]: 2025-11-26 11:43:29.207761336 +0000 UTC m=+0.099174530 container attach 874144b74fd0a5da1c8a711338d21eb591d777de6613fb0be08b70f6cf50a994 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:43:29 compute-0 podman[119252]: 2025-11-26 11:43:29.124905017 +0000 UTC m=+0.016318221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:43:29 compute-0 python3.9[119293]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:43:29 compute-0 sudo[119288]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:29 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:29 compute-0 ceph-mon[74928]: 11.b scrub starts
Nov 26 11:43:29 compute-0 ceph-mon[74928]: 11.b scrub ok
Nov 26 11:43:29 compute-0 ceph-mon[74928]: 8.1a scrub starts
Nov 26 11:43:29 compute-0 ceph-mon[74928]: 8.1a scrub ok
Nov 26 11:43:29 compute-0 sudo[119449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmzikioyghlqhvijeuodlrsbpragqjvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157409.5083723-244-270099705742425/AnsiballZ_stat.py'
Nov 26 11:43:29 compute-0 sudo[119449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:29 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 26 11:43:29 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 26 11:43:29 compute-0 python3.9[119451]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:43:29 compute-0 objective_kare[119294]: {
Nov 26 11:43:29 compute-0 objective_kare[119294]:     "0": [
Nov 26 11:43:29 compute-0 objective_kare[119294]:         {
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "devices": [
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "/dev/loop3"
Nov 26 11:43:29 compute-0 objective_kare[119294]:             ],
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "lv_name": "ceph_lv0",
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "lv_size": "21470642176",
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a9ad59a0-aa2e-4d92-b571-519d2d145b6a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "lv_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "name": "ceph_lv0",
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "tags": {
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.block_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.cluster_name": "ceph",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.crush_device_class": "",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.encrypted": "0",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.osd_fsid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.osd_id": "0",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.type": "block",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.vdo": "0"
Nov 26 11:43:29 compute-0 objective_kare[119294]:             },
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "type": "block",
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "vg_name": "ceph_vg0"
Nov 26 11:43:29 compute-0 objective_kare[119294]:         }
Nov 26 11:43:29 compute-0 objective_kare[119294]:     ],
Nov 26 11:43:29 compute-0 objective_kare[119294]:     "1": [
Nov 26 11:43:29 compute-0 objective_kare[119294]:         {
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "devices": [
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "/dev/loop4"
Nov 26 11:43:29 compute-0 objective_kare[119294]:             ],
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "lv_name": "ceph_lv1",
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "lv_size": "21470642176",
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2627095b-eef8-4027-bfef-68bf7cb6801f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "lv_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "name": "ceph_lv1",
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "tags": {
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.block_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.cluster_name": "ceph",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.crush_device_class": "",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.encrypted": "0",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.osd_fsid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.osd_id": "1",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.type": "block",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.vdo": "0"
Nov 26 11:43:29 compute-0 objective_kare[119294]:             },
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "type": "block",
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "vg_name": "ceph_vg1"
Nov 26 11:43:29 compute-0 objective_kare[119294]:         }
Nov 26 11:43:29 compute-0 objective_kare[119294]:     ],
Nov 26 11:43:29 compute-0 objective_kare[119294]:     "2": [
Nov 26 11:43:29 compute-0 objective_kare[119294]:         {
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "devices": [
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "/dev/loop5"
Nov 26 11:43:29 compute-0 objective_kare[119294]:             ],
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "lv_name": "ceph_lv2",
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "lv_size": "21470642176",
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d56156fb-7361-4bef-b06b-1320109b4323,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "lv_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "name": "ceph_lv2",
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "tags": {
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.block_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.cluster_name": "ceph",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.crush_device_class": "",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.encrypted": "0",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.osd_fsid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.osd_id": "2",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.type": "block",
Nov 26 11:43:29 compute-0 objective_kare[119294]:                 "ceph.vdo": "0"
Nov 26 11:43:29 compute-0 objective_kare[119294]:             },
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "type": "block",
Nov 26 11:43:29 compute-0 objective_kare[119294]:             "vg_name": "ceph_vg2"
Nov 26 11:43:29 compute-0 objective_kare[119294]:         }
Nov 26 11:43:29 compute-0 objective_kare[119294]:     ]
Nov 26 11:43:29 compute-0 objective_kare[119294]: }
Nov 26 11:43:29 compute-0 sudo[119449]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:29 compute-0 systemd[1]: libpod-874144b74fd0a5da1c8a711338d21eb591d777de6613fb0be08b70f6cf50a994.scope: Deactivated successfully.
Nov 26 11:43:29 compute-0 conmon[119294]: conmon 874144b74fd0a5da1c8a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-874144b74fd0a5da1c8a711338d21eb591d777de6613fb0be08b70f6cf50a994.scope/container/memory.events
Nov 26 11:43:29 compute-0 podman[119458]: 2025-11-26 11:43:29.949969758 +0000 UTC m=+0.020204111 container died 874144b74fd0a5da1c8a711338d21eb591d777de6613fb0be08b70f6cf50a994 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:43:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-004d2654c15acec5215f27bc9d22cb016ada9791ae65e498a29297f58e70bf98-merged.mount: Deactivated successfully.
Nov 26 11:43:29 compute-0 podman[119458]: 2025-11-26 11:43:29.982901264 +0000 UTC m=+0.053135597 container remove 874144b74fd0a5da1c8a711338d21eb591d777de6613fb0be08b70f6cf50a994 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kare, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:43:29 compute-0 systemd[1]: libpod-conmon-874144b74fd0a5da1c8a711338d21eb591d777de6613fb0be08b70f6cf50a994.scope: Deactivated successfully.
Nov 26 11:43:30 compute-0 sudo[119036]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:30 compute-0 sudo[119517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:43:30 compute-0 sudo[119517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:43:30 compute-0 sudo[119517]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:30 compute-0 sudo[119568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfipevcuunjjraprwpbhakzusfclillu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157409.5083723-244-270099705742425/AnsiballZ_file.py'
Nov 26 11:43:30 compute-0 sudo[119568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:30 compute-0 sudo[119569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:43:30 compute-0 sudo[119569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:43:30 compute-0 sudo[119569]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:30 compute-0 sudo[119596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:43:30 compute-0 sudo[119596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:43:30 compute-0 sudo[119596]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:30 compute-0 sudo[119621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- raw list --format json
Nov 26 11:43:30 compute-0 sudo[119621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:43:30 compute-0 python3.9[119578]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:43:30 compute-0 sudo[119568]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:30 compute-0 podman[119701]: 2025-11-26 11:43:30.45205717 +0000 UTC m=+0.028441339 container create 5d3e1c62ae69234457bede2afb679a50bb96873e5e3e4cbc4711d4fcf6be8142 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dijkstra, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:43:30 compute-0 systemd[1]: Started libpod-conmon-5d3e1c62ae69234457bede2afb679a50bb96873e5e3e4cbc4711d4fcf6be8142.scope.
Nov 26 11:43:30 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:43:30 compute-0 podman[119701]: 2025-11-26 11:43:30.509332206 +0000 UTC m=+0.085716385 container init 5d3e1c62ae69234457bede2afb679a50bb96873e5e3e4cbc4711d4fcf6be8142 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dijkstra, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 11:43:30 compute-0 podman[119701]: 2025-11-26 11:43:30.514359819 +0000 UTC m=+0.090743988 container start 5d3e1c62ae69234457bede2afb679a50bb96873e5e3e4cbc4711d4fcf6be8142 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dijkstra, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:43:30 compute-0 podman[119701]: 2025-11-26 11:43:30.515751253 +0000 UTC m=+0.092135442 container attach 5d3e1c62ae69234457bede2afb679a50bb96873e5e3e4cbc4711d4fcf6be8142 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:43:30 compute-0 epic_dijkstra[119714]: 167 167
Nov 26 11:43:30 compute-0 systemd[1]: libpod-5d3e1c62ae69234457bede2afb679a50bb96873e5e3e4cbc4711d4fcf6be8142.scope: Deactivated successfully.
Nov 26 11:43:30 compute-0 podman[119701]: 2025-11-26 11:43:30.517810618 +0000 UTC m=+0.094194787 container died 5d3e1c62ae69234457bede2afb679a50bb96873e5e3e4cbc4711d4fcf6be8142 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dijkstra, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:43:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-607d2cae52261736f1776f55e4839fa6066e507e06b5a9c45170c5932b48bad5-merged.mount: Deactivated successfully.
Nov 26 11:43:30 compute-0 podman[119701]: 2025-11-26 11:43:30.440496534 +0000 UTC m=+0.016880723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:43:30 compute-0 podman[119701]: 2025-11-26 11:43:30.537097788 +0000 UTC m=+0.113481957 container remove 5d3e1c62ae69234457bede2afb679a50bb96873e5e3e4cbc4711d4fcf6be8142 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:43:30 compute-0 systemd[1]: libpod-conmon-5d3e1c62ae69234457bede2afb679a50bb96873e5e3e4cbc4711d4fcf6be8142.scope: Deactivated successfully.
Nov 26 11:43:30 compute-0 podman[119759]: 2025-11-26 11:43:30.654900945 +0000 UTC m=+0.029037923 container create 8a0fe71d6a784d0e908487e9aacbd160c3276658751f3d223f7b3e6ff8360458 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_davinci, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:43:30 compute-0 systemd[1]: Started libpod-conmon-8a0fe71d6a784d0e908487e9aacbd160c3276658751f3d223f7b3e6ff8360458.scope.
Nov 26 11:43:30 compute-0 ceph-mon[74928]: pgmap v248: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:30 compute-0 ceph-mon[74928]: 9.a scrub starts
Nov 26 11:43:30 compute-0 ceph-mon[74928]: 9.a scrub ok
Nov 26 11:43:30 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c91dfb6742b341d16004e3cfabc609775573bda716b3a8a88e064edcade44a21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c91dfb6742b341d16004e3cfabc609775573bda716b3a8a88e064edcade44a21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c91dfb6742b341d16004e3cfabc609775573bda716b3a8a88e064edcade44a21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c91dfb6742b341d16004e3cfabc609775573bda716b3a8a88e064edcade44a21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:43:30 compute-0 podman[119759]: 2025-11-26 11:43:30.721154676 +0000 UTC m=+0.095291665 container init 8a0fe71d6a784d0e908487e9aacbd160c3276658751f3d223f7b3e6ff8360458 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_davinci, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 11:43:30 compute-0 podman[119759]: 2025-11-26 11:43:30.725967444 +0000 UTC m=+0.100104423 container start 8a0fe71d6a784d0e908487e9aacbd160c3276658751f3d223f7b3e6ff8360458 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_davinci, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:43:30 compute-0 podman[119759]: 2025-11-26 11:43:30.72698348 +0000 UTC m=+0.101120459 container attach 8a0fe71d6a784d0e908487e9aacbd160c3276658751f3d223f7b3e6ff8360458 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_davinci, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 11:43:30 compute-0 podman[119759]: 2025-11-26 11:43:30.643394081 +0000 UTC m=+0.017531079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:43:30 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.10 deep-scrub starts
Nov 26 11:43:30 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.10 deep-scrub ok
Nov 26 11:43:30 compute-0 sudo[119879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpkbxnygkrishzhbpjpnpxatrdtpktra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157410.620893-262-191823029980379/AnsiballZ_lineinfile.py'
Nov 26 11:43:30 compute-0 sudo[119879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:31 compute-0 python3.9[119881]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:43:31 compute-0 sudo[119879]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:31 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:31 compute-0 mystifying_davinci[119801]: {
Nov 26 11:43:31 compute-0 mystifying_davinci[119801]:     "2627095b-eef8-4027-bfef-68bf7cb6801f": {
Nov 26 11:43:31 compute-0 mystifying_davinci[119801]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:43:31 compute-0 mystifying_davinci[119801]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 11:43:31 compute-0 mystifying_davinci[119801]:         "osd_id": 1,
Nov 26 11:43:31 compute-0 mystifying_davinci[119801]:         "osd_uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:43:31 compute-0 mystifying_davinci[119801]:         "type": "bluestore"
Nov 26 11:43:31 compute-0 mystifying_davinci[119801]:     },
Nov 26 11:43:31 compute-0 mystifying_davinci[119801]:     "a9ad59a0-aa2e-4d92-b571-519d2d145b6a": {
Nov 26 11:43:31 compute-0 mystifying_davinci[119801]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:43:31 compute-0 mystifying_davinci[119801]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 11:43:31 compute-0 mystifying_davinci[119801]:         "osd_id": 0,
Nov 26 11:43:31 compute-0 mystifying_davinci[119801]:         "osd_uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:43:31 compute-0 mystifying_davinci[119801]:         "type": "bluestore"
Nov 26 11:43:31 compute-0 mystifying_davinci[119801]:     },
Nov 26 11:43:31 compute-0 mystifying_davinci[119801]:     "d56156fb-7361-4bef-b06b-1320109b4323": {
Nov 26 11:43:31 compute-0 mystifying_davinci[119801]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:43:31 compute-0 mystifying_davinci[119801]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 11:43:31 compute-0 mystifying_davinci[119801]:         "osd_id": 2,
Nov 26 11:43:31 compute-0 mystifying_davinci[119801]:         "osd_uuid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:43:31 compute-0 mystifying_davinci[119801]:         "type": "bluestore"
Nov 26 11:43:31 compute-0 mystifying_davinci[119801]:     }
Nov 26 11:43:31 compute-0 mystifying_davinci[119801]: }
Nov 26 11:43:31 compute-0 systemd[1]: libpod-8a0fe71d6a784d0e908487e9aacbd160c3276658751f3d223f7b3e6ff8360458.scope: Deactivated successfully.
Nov 26 11:43:31 compute-0 conmon[119801]: conmon 8a0fe71d6a784d0e9084 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8a0fe71d6a784d0e908487e9aacbd160c3276658751f3d223f7b3e6ff8360458.scope/container/memory.events
Nov 26 11:43:31 compute-0 podman[119759]: 2025-11-26 11:43:31.506844514 +0000 UTC m=+0.880981494 container died 8a0fe71d6a784d0e908487e9aacbd160c3276658751f3d223f7b3e6ff8360458 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:43:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c91dfb6742b341d16004e3cfabc609775573bda716b3a8a88e064edcade44a21-merged.mount: Deactivated successfully.
Nov 26 11:43:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:43:31 compute-0 podman[119759]: 2025-11-26 11:43:31.546053964 +0000 UTC m=+0.920190942 container remove 8a0fe71d6a784d0e908487e9aacbd160c3276658751f3d223f7b3e6ff8360458 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:43:31 compute-0 systemd[1]: libpod-conmon-8a0fe71d6a784d0e908487e9aacbd160c3276658751f3d223f7b3e6ff8360458.scope: Deactivated successfully.
Nov 26 11:43:31 compute-0 sudo[119621]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:43:31 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:43:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:43:31 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:43:31 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 53218ff5-0c2d-4df4-9a15-c90fa4588253 does not exist
Nov 26 11:43:31 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 6171b5ef-3936-4e21-9b5d-bb03bc8f41c6 does not exist
Nov 26 11:43:31 compute-0 sudo[120002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:43:31 compute-0 sudo[120002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:43:31 compute-0 sudo[120002]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:31 compute-0 sudo[120046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:43:31 compute-0 sudo[120046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:43:31 compute-0 sudo[120046]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:31 compute-0 ceph-mon[74928]: 9.10 deep-scrub starts
Nov 26 11:43:31 compute-0 ceph-mon[74928]: 9.10 deep-scrub ok
Nov 26 11:43:31 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:43:31 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:43:31 compute-0 sudo[120119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icuzymprkpqhklyejwfsbnjvjacintqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157411.4934533-277-43913585344406/AnsiballZ_setup.py'
Nov 26 11:43:31 compute-0 sudo[120119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:31 compute-0 python3.9[120121]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 11:43:32 compute-0 sudo[120119]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:32 compute-0 sudo[120203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nupweqvhemcbrycztnrcugfabpivdcjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157411.4934533-277-43913585344406/AnsiballZ_systemd.py'
Nov 26 11:43:32 compute-0 sudo[120203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:32 compute-0 ceph-mon[74928]: pgmap v249: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:32 compute-0 python3.9[120205]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:43:32 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 26 11:43:32 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 26 11:43:32 compute-0 sudo[120203]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:33 compute-0 sshd-session[114877]: Connection closed by 192.168.122.30 port 57302
Nov 26 11:43:33 compute-0 sshd-session[114874]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:43:33 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Nov 26 11:43:33 compute-0 systemd[1]: session-37.scope: Consumed 16.835s CPU time.
Nov 26 11:43:33 compute-0 systemd-logind[744]: Session 37 logged out. Waiting for processes to exit.
Nov 26 11:43:33 compute-0 systemd-logind[744]: Removed session 37.
Nov 26 11:43:33 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:33 compute-0 ceph-mon[74928]: 7.6 scrub starts
Nov 26 11:43:33 compute-0 ceph-mon[74928]: 7.6 scrub ok
Nov 26 11:43:34 compute-0 ceph-mon[74928]: pgmap v250: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:34 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.12 deep-scrub starts
Nov 26 11:43:34 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.12 deep-scrub ok
Nov 26 11:43:35 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:35 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 26 11:43:35 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Nov 26 11:43:35 compute-0 ceph-mon[74928]: 9.12 deep-scrub starts
Nov 26 11:43:35 compute-0 ceph-mon[74928]: 9.12 deep-scrub ok
Nov 26 11:43:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:43:36 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 26 11:43:36 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 26 11:43:36 compute-0 ceph-mon[74928]: pgmap v251: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:36 compute-0 ceph-mon[74928]: 9.14 scrub starts
Nov 26 11:43:36 compute-0 ceph-mon[74928]: 9.14 scrub ok
Nov 26 11:43:36 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 26 11:43:36 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 26 11:43:36 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Nov 26 11:43:36 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Nov 26 11:43:37 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:37 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Nov 26 11:43:37 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Nov 26 11:43:37 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.5 deep-scrub starts
Nov 26 11:43:37 compute-0 ceph-mon[74928]: 11.18 scrub starts
Nov 26 11:43:37 compute-0 ceph-mon[74928]: 11.18 scrub ok
Nov 26 11:43:37 compute-0 ceph-mon[74928]: 9.1a scrub starts
Nov 26 11:43:37 compute-0 ceph-mon[74928]: 9.1a scrub ok
Nov 26 11:43:37 compute-0 ceph-mon[74928]: 11.10 scrub starts
Nov 26 11:43:37 compute-0 ceph-mon[74928]: 11.10 scrub ok
Nov 26 11:43:37 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.5 deep-scrub ok
Nov 26 11:43:37 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Nov 26 11:43:37 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Nov 26 11:43:38 compute-0 sshd-session[120232]: Accepted publickey for zuul from 192.168.122.30 port 43470 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:43:38 compute-0 systemd-logind[744]: New session 38 of user zuul.
Nov 26 11:43:38 compute-0 systemd[1]: Started Session 38 of User zuul.
Nov 26 11:43:38 compute-0 sshd-session[120232]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:43:38 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 26 11:43:38 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 26 11:43:38 compute-0 ceph-mon[74928]: pgmap v252: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:38 compute-0 ceph-mon[74928]: 11.2 scrub starts
Nov 26 11:43:38 compute-0 ceph-mon[74928]: 11.2 scrub ok
Nov 26 11:43:38 compute-0 ceph-mon[74928]: 11.5 deep-scrub starts
Nov 26 11:43:38 compute-0 ceph-mon[74928]: 11.5 deep-scrub ok
Nov 26 11:43:38 compute-0 ceph-mon[74928]: 11.17 scrub starts
Nov 26 11:43:38 compute-0 ceph-mon[74928]: 11.17 scrub ok
Nov 26 11:43:38 compute-0 sudo[120385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gexvyhngjjhwkpsepkankbvmxutnrwik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157418.4419193-22-116353806065263/AnsiballZ_file.py'
Nov 26 11:43:38 compute-0 sudo[120385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:38 compute-0 python3.9[120387]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:43:38 compute-0 sudo[120385]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:39 compute-0 sudo[120537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywbfstgnzsscawcxjzgscelqwxojbslh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157419.0657165-34-280837359566679/AnsiballZ_stat.py'
Nov 26 11:43:39 compute-0 sudo[120537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:39 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:39 compute-0 python3.9[120539]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:43:39 compute-0 sudo[120537]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:39 compute-0 sudo[120615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhpwzvehnzlketebzpdfegebmjhyiskn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157419.0657165-34-280837359566679/AnsiballZ_file.py'
Nov 26 11:43:39 compute-0 sudo[120615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:39 compute-0 ceph-mon[74928]: 8.1b scrub starts
Nov 26 11:43:39 compute-0 ceph-mon[74928]: 8.1b scrub ok
Nov 26 11:43:39 compute-0 python3.9[120617]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:43:39 compute-0 sudo[120615]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:40 compute-0 sshd-session[120235]: Connection closed by 192.168.122.30 port 43470
Nov 26 11:43:40 compute-0 sshd-session[120232]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:43:40 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Nov 26 11:43:40 compute-0 systemd[1]: session-38.scope: Consumed 1.079s CPU time.
Nov 26 11:43:40 compute-0 systemd-logind[744]: Session 38 logged out. Waiting for processes to exit.
Nov 26 11:43:40 compute-0 systemd-logind[744]: Removed session 38.
Nov 26 11:43:40 compute-0 ceph-mon[74928]: pgmap v253: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Optimize plan auto_2025-11-26_11:43:41
Nov 26 11:43:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 11:43:41 compute-0 ceph-mgr[75197]: [balancer INFO root] do_upmap
Nov 26 11:43:41 compute-0 ceph-mgr[75197]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'images', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'backups']
Nov 26 11:43:41 compute-0 ceph-mgr[75197]: [balancer INFO root] prepared 0/10 changes
Nov 26 11:43:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:43:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:43:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:43:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:43:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:43:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:43:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 11:43:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:43:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 11:43:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:43:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:43:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:43:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:43:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:43:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:43:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:43:41 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:43:42 compute-0 ceph-mon[74928]: pgmap v254: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:43 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:43 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.1a deep-scrub starts
Nov 26 11:43:43 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.1a deep-scrub ok
Nov 26 11:43:44 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Nov 26 11:43:44 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Nov 26 11:43:44 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Nov 26 11:43:44 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Nov 26 11:43:44 compute-0 ceph-mon[74928]: pgmap v255: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:44 compute-0 ceph-mon[74928]: 11.1a deep-scrub starts
Nov 26 11:43:44 compute-0 ceph-mon[74928]: 11.1a deep-scrub ok
Nov 26 11:43:45 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:45 compute-0 sshd-session[120642]: Accepted publickey for zuul from 192.168.122.30 port 45452 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:43:45 compute-0 systemd-logind[744]: New session 39 of user zuul.
Nov 26 11:43:45 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.a scrub starts
Nov 26 11:43:45 compute-0 systemd[1]: Started Session 39 of User zuul.
Nov 26 11:43:45 compute-0 sshd-session[120642]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:43:45 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.a scrub ok
Nov 26 11:43:45 compute-0 ceph-mon[74928]: 11.1f scrub starts
Nov 26 11:43:45 compute-0 ceph-mon[74928]: 11.1f scrub ok
Nov 26 11:43:45 compute-0 ceph-mon[74928]: 11.7 scrub starts
Nov 26 11:43:45 compute-0 ceph-mon[74928]: 11.7 scrub ok
Nov 26 11:43:46 compute-0 python3.9[120795]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:43:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:43:46 compute-0 ceph-mon[74928]: pgmap v256: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:46 compute-0 ceph-mon[74928]: 11.a scrub starts
Nov 26 11:43:46 compute-0 ceph-mon[74928]: 11.a scrub ok
Nov 26 11:43:47 compute-0 sudo[120949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usjcqutddjybybexkbdwtudghinfkolk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157426.8111804-33-47089829396620/AnsiballZ_file.py'
Nov 26 11:43:47 compute-0 sudo[120949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:47 compute-0 python3.9[120951]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:43:47 compute-0 sudo[120949]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:47 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:47 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.c scrub starts
Nov 26 11:43:47 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.c scrub ok
Nov 26 11:43:47 compute-0 sudo[121124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puvrwnjelimshkxefonsvgkmhzygdamn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157427.4593234-41-201102265204340/AnsiballZ_stat.py'
Nov 26 11:43:47 compute-0 sudo[121124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:47 compute-0 python3.9[121126]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:43:47 compute-0 sudo[121124]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:48 compute-0 sudo[121202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yunbjypsrfihytsgxxamxwqinpavbtza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157427.4593234-41-201102265204340/AnsiballZ_file.py'
Nov 26 11:43:48 compute-0 sudo[121202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:48 compute-0 python3.9[121204]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.bvu29rs6 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:43:48 compute-0 sudo[121202]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:48 compute-0 sudo[121354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmcldwzrkhwhwzcdgwujvghrrxcuhlmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157428.5351076-61-35222969425559/AnsiballZ_stat.py'
Nov 26 11:43:48 compute-0 sudo[121354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:48 compute-0 ceph-mon[74928]: pgmap v257: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:48 compute-0 ceph-mon[74928]: 11.c scrub starts
Nov 26 11:43:48 compute-0 ceph-mon[74928]: 11.c scrub ok
Nov 26 11:43:48 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.d scrub starts
Nov 26 11:43:48 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.d scrub ok
Nov 26 11:43:48 compute-0 python3.9[121356]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:43:48 compute-0 sudo[121354]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:49 compute-0 sudo[121432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbytlvuoyiwyvqrwbjfkferorwrokshz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157428.5351076-61-35222969425559/AnsiballZ_file.py'
Nov 26 11:43:49 compute-0 sudo[121432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:49 compute-0 python3.9[121434]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.72mcjch5 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:43:49 compute-0 sudo[121432]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:49 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:49 compute-0 sudo[121584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbhuyoluhicrdutawfaylvepyydgjnpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157429.3467453-74-145557850629365/AnsiballZ_file.py'
Nov 26 11:43:49 compute-0 sudo[121584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:49 compute-0 python3.9[121586]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:43:49 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Nov 26 11:43:49 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Nov 26 11:43:49 compute-0 sudo[121584]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:49 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Nov 26 11:43:49 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Nov 26 11:43:49 compute-0 ceph-mon[74928]: 10.d scrub starts
Nov 26 11:43:49 compute-0 ceph-mon[74928]: 10.d scrub ok
Nov 26 11:43:49 compute-0 sudo[121736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksjaslyssjmuwvazzfjrwpftszcjvttg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157429.8036137-82-31675014672573/AnsiballZ_stat.py'
Nov 26 11:43:49 compute-0 sudo[121736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:50 compute-0 python3.9[121738]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:43:50 compute-0 sudo[121736]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:50 compute-0 sudo[121814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzfrtqkhwiebkyhwlhuhbtceijypmsav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157429.8036137-82-31675014672573/AnsiballZ_file.py'
Nov 26 11:43:50 compute-0 sudo[121814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:50 compute-0 python3.9[121816]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:43:50 compute-0 sudo[121814]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 11:43:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:43:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 11:43:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:43:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:43:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:43:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:43:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:43:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:43:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:43:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:43:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:43:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 11:43:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:43:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:43:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:43:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 11:43:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:43:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 11:43:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:43:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:43:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:43:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 11:43:50 compute-0 sudo[121966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-desiccsgadndsgfpbdshajmxqcubflee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157430.5372784-82-229779591523116/AnsiballZ_stat.py'
Nov 26 11:43:50 compute-0 sudo[121966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:50 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 26 11:43:50 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 26 11:43:50 compute-0 ceph-mon[74928]: pgmap v258: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:50 compute-0 ceph-mon[74928]: 8.1c scrub starts
Nov 26 11:43:50 compute-0 ceph-mon[74928]: 11.13 scrub starts
Nov 26 11:43:50 compute-0 ceph-mon[74928]: 8.1c scrub ok
Nov 26 11:43:50 compute-0 ceph-mon[74928]: 11.13 scrub ok
Nov 26 11:43:50 compute-0 python3.9[121968]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:43:50 compute-0 sudo[121966]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:50 compute-0 sudo[122044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njzpzmfsolbgkehsttqdatxwkqmrgnru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157430.5372784-82-229779591523116/AnsiballZ_file.py'
Nov 26 11:43:50 compute-0 sudo[122044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:51 compute-0 python3.9[122046]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:43:51 compute-0 sudo[122044]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:51 compute-0 sudo[122196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajyuofnevfonlhcwkmmmhpzftrsqokci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157431.272231-105-149158737591102/AnsiballZ_file.py'
Nov 26 11:43:51 compute-0 sudo[122196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:51 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:43:51 compute-0 python3.9[122198]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:43:51 compute-0 sudo[122196]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:51 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Nov 26 11:43:51 compute-0 ceph-mon[74928]: 6.8 scrub starts
Nov 26 11:43:51 compute-0 ceph-mon[74928]: 6.8 scrub ok
Nov 26 11:43:51 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Nov 26 11:43:51 compute-0 sudo[122348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvjlkcnmnrscudwnpilhyasgkuvffhrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157431.711283-113-190515703340009/AnsiballZ_stat.py'
Nov 26 11:43:51 compute-0 sudo[122348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:52 compute-0 python3.9[122350]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:43:52 compute-0 sudo[122348]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:52 compute-0 sudo[122426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkrgsvjbytcygclwtoaelbazsqsftbhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157431.711283-113-190515703340009/AnsiballZ_file.py'
Nov 26 11:43:52 compute-0 sudo[122426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:52 compute-0 python3.9[122428]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:43:52 compute-0 sudo[122426]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:52 compute-0 sudo[122578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpnwhiuupbtczechlismbnmudpqhjmtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157432.4589584-125-249589334753218/AnsiballZ_stat.py'
Nov 26 11:43:52 compute-0 sudo[122578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:52 compute-0 ceph-mon[74928]: pgmap v259: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:52 compute-0 ceph-mon[74928]: 11.16 scrub starts
Nov 26 11:43:52 compute-0 ceph-mon[74928]: 11.16 scrub ok
Nov 26 11:43:52 compute-0 python3.9[122580]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:43:52 compute-0 sudo[122578]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:52 compute-0 sudo[122656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtqtkhytlddhecqlbwohdyeroumtidgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157432.4589584-125-249589334753218/AnsiballZ_file.py'
Nov 26 11:43:52 compute-0 sudo[122656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:53 compute-0 python3.9[122658]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:43:53 compute-0 sudo[122656]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:53 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:53 compute-0 sudo[122808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcvsrljmrfmgejelocxbqfevqryekwgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157433.2044623-137-114743348084905/AnsiballZ_systemd.py'
Nov 26 11:43:53 compute-0 sudo[122808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:53 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.1d deep-scrub starts
Nov 26 11:43:53 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.1d deep-scrub ok
Nov 26 11:43:53 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.e scrub starts
Nov 26 11:43:53 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.e scrub ok
Nov 26 11:43:53 compute-0 python3.9[122810]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:43:53 compute-0 systemd[1]: Reloading.
Nov 26 11:43:53 compute-0 systemd-rc-local-generator[122829]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:43:53 compute-0 systemd-sysv-generator[122834]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:43:54 compute-0 sudo[122808]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:54 compute-0 sudo[122997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqacejtbhdocinqcraxizfnbpfujjvoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157434.2423575-145-253357777976195/AnsiballZ_stat.py'
Nov 26 11:43:54 compute-0 sudo[122997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:54 compute-0 python3.9[122999]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:43:54 compute-0 sudo[122997]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:54 compute-0 sudo[123075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afphlumefzveqzyglmfhjsmbmutyfeec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157434.2423575-145-253357777976195/AnsiballZ_file.py'
Nov 26 11:43:54 compute-0 sudo[123075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:54 compute-0 ceph-mon[74928]: pgmap v260: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:54 compute-0 ceph-mon[74928]: 11.1d deep-scrub starts
Nov 26 11:43:54 compute-0 ceph-mon[74928]: 11.1d deep-scrub ok
Nov 26 11:43:54 compute-0 ceph-mon[74928]: 9.e scrub starts
Nov 26 11:43:54 compute-0 ceph-mon[74928]: 9.e scrub ok
Nov 26 11:43:54 compute-0 python3.9[123077]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:43:54 compute-0 sudo[123075]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:55 compute-0 sudo[123227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igzjvneiwbihepkcoczfldytsjjbvtiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157435.017365-157-121346238970713/AnsiballZ_stat.py'
Nov 26 11:43:55 compute-0 sudo[123227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:55 compute-0 python3.9[123229]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:43:55 compute-0 sudo[123227]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:55 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:55 compute-0 sudo[123305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhsoraewakcpouvqgwgylvohvbvveolq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157435.017365-157-121346238970713/AnsiballZ_file.py'
Nov 26 11:43:55 compute-0 sudo[123305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:55 compute-0 python3.9[123307]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
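[editor's note] Preset files under /etc/systemd/system-preset declare the default enable/disable policy for units. The actual content of 91-netns-placeholder.preset is not shown in the log; it would plausibly be a single line such as:
    enable netns-placeholder.service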
Nov 26 11:43:55 compute-0 sudo[123305]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:55 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Nov 26 11:43:55 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Nov 26 11:43:55 compute-0 sudo[123457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmqilafvkyyqfldhdjtnwttcwkddmytc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157435.798851-169-60896797634614/AnsiballZ_systemd.py'
Nov 26 11:43:55 compute-0 sudo[123457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:56 compute-0 python3.9[123459]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:43:56 compute-0 systemd[1]: Reloading.
Nov 26 11:43:56 compute-0 systemd-rc-local-generator[123480]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:43:56 compute-0 systemd-sysv-generator[123483]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:43:56 compute-0 systemd[1]: Starting Create netns directory...
Nov 26 11:43:56 compute-0 systemd[76422]: Created slice User Background Tasks Slice.
Nov 26 11:43:56 compute-0 systemd[76422]: Starting Cleanup of User's Temporary Files and Directories...
Nov 26 11:43:56 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 26 11:43:56 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 26 11:43:56 compute-0 systemd[1]: Finished Create netns directory.
Nov 26 11:43:56 compute-0 systemd[76422]: Finished Cleanup of User's Temporary Files and Directories.
Nov 26 11:43:56 compute-0 sudo[123457]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:43:56 compute-0 ceph-mon[74928]: pgmap v261: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:56 compute-0 ceph-mon[74928]: 10.7 scrub starts
Nov 26 11:43:56 compute-0 ceph-mon[74928]: 10.7 scrub ok
Nov 26 11:43:56 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 26 11:43:56 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 26 11:43:57 compute-0 python3.9[123651]: ansible-ansible.builtin.service_facts Invoked
Nov 26 11:43:57 compute-0 network[123668]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 11:43:57 compute-0 network[123669]: 'network-scripts' will be removed from distribution in near future.
Nov 26 11:43:57 compute-0 network[123670]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 11:43:57 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:57 compute-0 ceph-mon[74928]: 6.1 scrub starts
Nov 26 11:43:57 compute-0 ceph-mon[74928]: 6.1 scrub ok
Nov 26 11:43:58 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 26 11:43:58 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 26 11:43:58 compute-0 ceph-mon[74928]: pgmap v262: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:59 compute-0 sudo[123930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkgcsmffofyharkiyyptrhqrsikyintt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157439.184334-195-132785978803457/AnsiballZ_stat.py'
Nov 26 11:43:59 compute-0 sudo[123930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:59 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:43:59 compute-0 python3.9[123932]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:43:59 compute-0 sudo[123930]: pam_unix(sudo:session): session closed for user root
Nov 26 11:43:59 compute-0 sudo[124008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kznfdumyhclwgzvbqbkfldwgvdjallmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157439.184334-195-132785978803457/AnsiballZ_file.py'
Nov 26 11:43:59 compute-0 sudo[124008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:43:59 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Nov 26 11:43:59 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Nov 26 11:43:59 compute-0 ceph-mon[74928]: 9.6 scrub starts
Nov 26 11:43:59 compute-0 ceph-mon[74928]: 9.6 scrub ok
Nov 26 11:43:59 compute-0 python3.9[124010]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:43:59 compute-0 sudo[124008]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:00 compute-0 sudo[124160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhndnvvjnwlkvgyujarmtsdcxznicpeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157440.0215364-208-263039018942528/AnsiballZ_file.py'
Nov 26 11:44:00 compute-0 sudo[124160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:00 compute-0 python3.9[124162]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:44:00 compute-0 sudo[124160]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:00 compute-0 sudo[124312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twzfbpkyzewobmxopendjdhsslhsorts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157440.4881077-216-237608356219512/AnsiballZ_stat.py'
Nov 26 11:44:00 compute-0 sudo[124312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:00 compute-0 ceph-mon[74928]: pgmap v263: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:00 compute-0 ceph-mon[74928]: 9.7 scrub starts
Nov 26 11:44:00 compute-0 ceph-mon[74928]: 9.7 scrub ok
Nov 26 11:44:00 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Nov 26 11:44:00 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Nov 26 11:44:00 compute-0 python3.9[124314]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:44:00 compute-0 sudo[124312]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:00 compute-0 sudo[124390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwoxffbubjcwqnyvmsnuexlgkkqervef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157440.4881077-216-237608356219512/AnsiballZ_file.py'
Nov 26 11:44:00 compute-0 sudo[124390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:01 compute-0 python3.9[124392]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
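[editor's note] sshd-networks.yaml is one of the per-service rule fragments under /var/lib/edpm-config/firewall that edpm_nftables_from_files aggregates later in this run (11:44:05). Its schema is not visible here; a purely hypothetical fragment permitting SSH from the control-plane network might read:
    - rule_name: "003 accept ssh from ctlplane"
      rule:
        proto: tcp
        dport: 22
        source: 192.168.122.0/24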
Nov 26 11:44:01 compute-0 sudo[124390]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:01 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:44:01 compute-0 sudo[124542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfkesbrtbpektkuwfmwahjllehhktjhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157441.3513346-231-216274907953285/AnsiballZ_timezone.py'
Nov 26 11:44:01 compute-0 sudo[124542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:01 compute-0 ceph-mon[74928]: 10.4 scrub starts
Nov 26 11:44:01 compute-0 ceph-mon[74928]: 10.4 scrub ok
Nov 26 11:44:01 compute-0 python3.9[124544]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 26 11:44:01 compute-0 systemd[1]: Starting Time & Date Service...
Nov 26 11:44:01 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Nov 26 11:44:01 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Nov 26 11:44:01 compute-0 systemd[1]: Started Time & Date Service.
Nov 26 11:44:01 compute-0 sudo[124542]: pam_unix(sudo:session): session closed for user root
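[editor's note] The community.general.timezone task (name=UTC) works through systemd-timedated, which is why "Time & Date Service" starts here; on a systemd host it is roughly equivalent to:
    timedatectl set-timezone UTC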
Nov 26 11:44:02 compute-0 sudo[124698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvomlinbzwsgufggbrcundjrxzjhuhmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157442.0910451-240-142683065484533/AnsiballZ_file.py'
Nov 26 11:44:02 compute-0 sudo[124698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:02 compute-0 python3.9[124700]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:44:02 compute-0 sudo[124698]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 26 11:44:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 26 11:44:02 compute-0 sudo[124850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhifsrzyivbseiegsxhzpanbbymjdgtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157442.5559645-248-127730213479968/AnsiballZ_stat.py'
Nov 26 11:44:02 compute-0 sudo[124850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:02 compute-0 ceph-mon[74928]: pgmap v264: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:02 compute-0 ceph-mon[74928]: 10.19 scrub starts
Nov 26 11:44:02 compute-0 ceph-mon[74928]: 10.19 scrub ok
Nov 26 11:44:02 compute-0 python3.9[124852]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:44:02 compute-0 sudo[124850]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:03 compute-0 sudo[124928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlekymsumymfmcrugeorvfrxjckfhmzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157442.5559645-248-127730213479968/AnsiballZ_file.py'
Nov 26 11:44:03 compute-0 sudo[124928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:03 compute-0 python3.9[124930]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:44:03 compute-0 sudo[124928]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:03 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:03 compute-0 sudo[125080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-watwpsvuwuncyzvinoxqoezywwpwodqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157443.3531537-260-266640364361373/AnsiballZ_stat.py'
Nov 26 11:44:03 compute-0 sudo[125080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:03 compute-0 python3.9[125082]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:44:03 compute-0 sudo[125080]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:03 compute-0 ceph-mon[74928]: 9.f scrub starts
Nov 26 11:44:03 compute-0 ceph-mon[74928]: 9.f scrub ok
Nov 26 11:44:03 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.8 deep-scrub starts
Nov 26 11:44:03 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.8 deep-scrub ok
Nov 26 11:44:03 compute-0 sudo[125158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkvopzvwaewrglupwafoqnzqoexjlbpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157443.3531537-260-266640364361373/AnsiballZ_file.py'
Nov 26 11:44:03 compute-0 sudo[125158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:03 compute-0 python3.9[125160]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.zp6bueyb recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:44:04 compute-0 sudo[125158]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:04 compute-0 sudo[125310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhrwxspbubkuexzyflcybpvouuqpnbuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157444.1123476-272-196188718385207/AnsiballZ_stat.py'
Nov 26 11:44:04 compute-0 sudo[125310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:04 compute-0 python3.9[125312]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:44:04 compute-0 sudo[125310]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:04 compute-0 sudo[125388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oieiqdzpwzacrqndrsgqmlnnjxiaynju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157444.1123476-272-196188718385207/AnsiballZ_file.py'
Nov 26 11:44:04 compute-0 sudo[125388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:04 compute-0 python3.9[125390]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:44:04 compute-0 sudo[125388]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:04 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Nov 26 11:44:04 compute-0 ceph-mon[74928]: pgmap v265: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:04 compute-0 ceph-mon[74928]: 10.8 deep-scrub starts
Nov 26 11:44:04 compute-0 ceph-mon[74928]: 10.8 deep-scrub ok
Nov 26 11:44:04 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Nov 26 11:44:04 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 26 11:44:04 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 26 11:44:05 compute-0 sudo[125540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trjdjllpmumprhaxcgworaftmcrdfkte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157444.8869627-285-115798185639295/AnsiballZ_command.py'
Nov 26 11:44:05 compute-0 sudo[125540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:05 compute-0 python3.9[125542]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:44:05 compute-0 sudo[125540]: pam_unix(sudo:session): session closed for user root
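[editor's note] The logged `nft -j list ruleset` dumps the current ruleset in libnftables JSON form (a single top-level "nftables" array), presumably so the role can inspect what is already loaded before templating its own files. The same inspection can be repeated by hand:
    nft -j list ruleset
    # => {"nftables": [{"metainfo": {...}}, {"table": {...}}, ...]}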
Nov 26 11:44:05 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:05 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 26 11:44:05 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 26 11:44:05 compute-0 sudo[125693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvworylrxmqwnxwpcigqumtfkqrdsqwg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764157445.4511375-293-124051667121412/AnsiballZ_edpm_nftables_from_files.py'
Nov 26 11:44:05 compute-0 sudo[125693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:05 compute-0 ceph-mon[74928]: 10.1 scrub starts
Nov 26 11:44:05 compute-0 ceph-mon[74928]: 10.1 scrub ok
Nov 26 11:44:05 compute-0 ceph-mon[74928]: 10.b scrub starts
Nov 26 11:44:05 compute-0 ceph-mon[74928]: 10.b scrub ok
Nov 26 11:44:05 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 26 11:44:05 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 26 11:44:05 compute-0 python3[125695]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 26 11:44:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.13 deep-scrub starts
Nov 26 11:44:05 compute-0 sudo[125693]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.13 deep-scrub ok
Nov 26 11:44:06 compute-0 sudo[125845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbzcocosvigsprmmcddozqzzznagbinl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157446.0298984-301-187965741905182/AnsiballZ_stat.py'
Nov 26 11:44:06 compute-0 sudo[125845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:06 compute-0 python3.9[125847]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:44:06 compute-0 sudo[125845]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:06 compute-0 sudo[125923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehxysfeclojsjyakolsfxnlrfskgtouj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157446.0298984-301-187965741905182/AnsiballZ_file.py'
Nov 26 11:44:06 compute-0 sudo[125923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:44:06 compute-0 python3.9[125925]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:44:06 compute-0 sudo[125923]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:06 compute-0 ceph-mon[74928]: pgmap v266: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:06 compute-0 ceph-mon[74928]: 9.17 scrub starts
Nov 26 11:44:06 compute-0 ceph-mon[74928]: 9.17 scrub ok
Nov 26 11:44:06 compute-0 ceph-mon[74928]: 10.15 scrub starts
Nov 26 11:44:06 compute-0 ceph-mon[74928]: 10.15 scrub ok
Nov 26 11:44:06 compute-0 ceph-mon[74928]: 10.13 deep-scrub starts
Nov 26 11:44:06 compute-0 ceph-mon[74928]: 10.13 deep-scrub ok
Nov 26 11:44:06 compute-0 sudo[126075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evpvggecbjmqunfxlaewkpzrxcocvsxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157446.7952223-313-90635395877894/AnsiballZ_stat.py'
Nov 26 11:44:06 compute-0 sudo[126075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:07 compute-0 python3.9[126077]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:44:07 compute-0 sudo[126075]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:07 compute-0 sudo[126153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enylnbpuorxzubloivnjcsakjtajezia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157446.7952223-313-90635395877894/AnsiballZ_file.py'
Nov 26 11:44:07 compute-0 sudo[126153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:07 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:07 compute-0 python3.9[126155]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:44:07 compute-0 sudo[126153]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:07 compute-0 sudo[126305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twchbcqpnvkfmtnsibljwgoufghcsssf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157447.5799818-325-218629340681909/AnsiballZ_stat.py'
Nov 26 11:44:07 compute-0 sudo[126305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:07 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.e scrub starts
Nov 26 11:44:07 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.e scrub ok
Nov 26 11:44:07 compute-0 python3.9[126307]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:44:07 compute-0 sudo[126305]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:08 compute-0 sudo[126383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sykfesdvkvbrrkyvvsomedzifrvdmtif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157447.5799818-325-218629340681909/AnsiballZ_file.py'
Nov 26 11:44:08 compute-0 sudo[126383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:08 compute-0 python3.9[126385]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:44:08 compute-0 sudo[126383]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:08 compute-0 sudo[126535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzvslujmrkwdjroczlswaqacfpmktrho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157448.344692-337-176998587269215/AnsiballZ_stat.py'
Nov 26 11:44:08 compute-0 sudo[126535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:08 compute-0 python3.9[126537]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:44:08 compute-0 sudo[126535]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:08 compute-0 ceph-mon[74928]: pgmap v267: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:08 compute-0 ceph-mon[74928]: 10.e scrub starts
Nov 26 11:44:08 compute-0 ceph-mon[74928]: 10.e scrub ok
Nov 26 11:44:08 compute-0 sudo[126613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdwwcfqdlluhuplmagvyxaxzumhbdbtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157448.344692-337-176998587269215/AnsiballZ_file.py'
Nov 26 11:44:08 compute-0 sudo[126613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:08 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Nov 26 11:44:08 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Nov 26 11:44:08 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 26 11:44:08 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 26 11:44:09 compute-0 python3.9[126615]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:44:09 compute-0 sudo[126613]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:09 compute-0 sudo[126765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrcgkeosheovrirvgvdluoftmqwwdczm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157449.1508677-349-80801845779150/AnsiballZ_stat.py'
Nov 26 11:44:09 compute-0 sudo[126765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:09 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:09 compute-0 python3.9[126767]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:44:09 compute-0 sudo[126765]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:09 compute-0 sudo[126843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iljvngtxzgptriynzuefekvkosucaicj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157449.1508677-349-80801845779150/AnsiballZ_file.py'
Nov 26 11:44:09 compute-0 sudo[126843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:09 compute-0 ceph-mon[74928]: 10.16 scrub starts
Nov 26 11:44:09 compute-0 ceph-mon[74928]: 10.16 scrub ok
Nov 26 11:44:09 compute-0 ceph-mon[74928]: 10.12 scrub starts
Nov 26 11:44:09 compute-0 ceph-mon[74928]: 10.12 scrub ok
Nov 26 11:44:09 compute-0 python3.9[126845]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:44:09 compute-0 sudo[126843]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:10 compute-0 sudo[126995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nztdiudhgaprqtbnnmxbjmdednrmweva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157449.9933984-362-55048742370195/AnsiballZ_command.py'
Nov 26 11:44:10 compute-0 sudo[126995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:10 compute-0 python3.9[126997]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:44:10 compute-0 sudo[126995]: pam_unix(sudo:session): session closed for user root
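[editor's note] The pipeline above concatenates the five generated .nft fragments in load order and feeds them to nft with -c, which parses and validates the ruleset without committing it, so any syntax or reference error fails this task before the firewall is touched. The check, exactly as logged, can be re-run manually:
    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -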
Nov 26 11:44:10 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.8 deep-scrub starts
Nov 26 11:44:10 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.8 deep-scrub ok
Nov 26 11:44:10 compute-0 sudo[127150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctxzjkvuiilufoyiwcjbijdkocyjdcfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157450.455466-370-9366194435845/AnsiballZ_blockinfile.py'
Nov 26 11:44:10 compute-0 sudo[127150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:10 compute-0 ceph-mon[74928]: pgmap v268: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:10 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.9 deep-scrub starts
Nov 26 11:44:10 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.9 deep-scrub ok
Nov 26 11:44:10 compute-0 python3.9[127152]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:44:10 compute-0 sudo[127150]: pam_unix(sudo:session): session closed for user root
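[editor's note] Given the blockinfile parameters above (marker "# {mark} ANSIBLE MANAGED BLOCK", validate "nft -c -f %s"), the managed block written to /etc/sysconfig/nftables.conf should read roughly:
    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK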
Nov 26 11:44:10 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 26 11:44:10 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 26 11:44:11 compute-0 sudo[127302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtllavbbzdphcfdbmkvhphiowddcgjdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157451.0898001-379-69116482519610/AnsiballZ_file.py'
Nov 26 11:44:11 compute-0 sudo[127302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:11 compute-0 python3.9[127304]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:44:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:44:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:44:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:44:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:44:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:44:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:44:11 compute-0 sudo[127302]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:11 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:44:11 compute-0 sudo[127454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzwjgecqtdricrjeszrvbqoiydjmqovd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157451.5276043-379-230709810930158/AnsiballZ_file.py'
Nov 26 11:44:11 compute-0 sudo[127454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:11 compute-0 ceph-mon[74928]: 9.8 deep-scrub starts
Nov 26 11:44:11 compute-0 ceph-mon[74928]: 9.8 deep-scrub ok
Nov 26 11:44:11 compute-0 ceph-mon[74928]: 10.9 deep-scrub starts
Nov 26 11:44:11 compute-0 ceph-mon[74928]: 10.9 deep-scrub ok
Nov 26 11:44:11 compute-0 ceph-mon[74928]: 10.11 scrub starts
Nov 26 11:44:11 compute-0 ceph-mon[74928]: 10.11 scrub ok
Nov 26 11:44:11 compute-0 python3.9[127456]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:44:11 compute-0 sudo[127454]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:12 compute-0 sudo[127606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqixigxnimhpfdxjwyoczcxddfgugocs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157451.9961994-394-205075505257402/AnsiballZ_mount.py'
Nov 26 11:44:12 compute-0 sudo[127606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:12 compute-0 python3.9[127608]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 26 11:44:12 compute-0 sudo[127606]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:12 compute-0 sudo[127758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmouwpprkgwtkhxrzlzwuqfeoewlgnsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157452.6047735-394-237971912276508/AnsiballZ_mount.py'
Nov 26 11:44:12 compute-0 sudo[127758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:12 compute-0 ceph-mon[74928]: pgmap v269: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:12 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 26 11:44:12 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 26 11:44:12 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 26 11:44:12 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 26 11:44:12 compute-0 python3.9[127760]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 26 11:44:12 compute-0 sudo[127758]: pam_unix(sudo:session): session closed for user root
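[editor's note] The two ansible.posix.mount tasks (state=mounted, boot=True) mount hugetlbfs with explicit page sizes and persist the entries. The equivalent manual commands, and the fstab lines the task arguments (dump=0, passno=0) imply, are roughly:
    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
    # /etc/fstab
    none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    none /dev/hugepages2M hugetlbfs pagesize=2M 0 0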
Nov 26 11:44:13 compute-0 sshd-session[120645]: Connection closed by 192.168.122.30 port 45452
Nov 26 11:44:13 compute-0 sshd-session[120642]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:44:13 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Nov 26 11:44:13 compute-0 systemd[1]: session-39.scope: Consumed 19.659s CPU time.
Nov 26 11:44:13 compute-0 systemd-logind[744]: Session 39 logged out. Waiting for processes to exit.
Nov 26 11:44:13 compute-0 systemd-logind[744]: Removed session 39.
Nov 26 11:44:13 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:13 compute-0 ceph-mon[74928]: 10.17 scrub starts
Nov 26 11:44:13 compute-0 ceph-mon[74928]: 10.17 scrub ok
Nov 26 11:44:13 compute-0 ceph-mon[74928]: 10.10 scrub starts
Nov 26 11:44:13 compute-0 ceph-mon[74928]: 10.10 scrub ok
Nov 26 11:44:13 compute-0 ceph-mon[74928]: pgmap v270: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:14 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 26 11:44:14 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 26 11:44:15 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:15 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 26 11:44:15 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 26 11:44:16 compute-0 ceph-mon[74928]: 10.1a scrub starts
Nov 26 11:44:16 compute-0 ceph-mon[74928]: 10.1a scrub ok
Nov 26 11:44:16 compute-0 ceph-mon[74928]: pgmap v271: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:44:17 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:17 compute-0 ceph-mon[74928]: 10.6 scrub starts
Nov 26 11:44:17 compute-0 ceph-mon[74928]: 10.6 scrub ok
Nov 26 11:44:17 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Nov 26 11:44:17 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Nov 26 11:44:17 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.f scrub starts
Nov 26 11:44:17 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.f scrub ok
Nov 26 11:44:18 compute-0 ceph-mon[74928]: pgmap v272: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:18 compute-0 ceph-mon[74928]: 9.18 scrub starts
Nov 26 11:44:18 compute-0 ceph-mon[74928]: 9.18 scrub ok
Nov 26 11:44:18 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 26 11:44:18 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 26 11:44:18 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Nov 26 11:44:18 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Nov 26 11:44:18 compute-0 sshd-session[127786]: Accepted publickey for zuul from 192.168.122.30 port 40992 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:44:18 compute-0 systemd-logind[744]: New session 40 of user zuul.
Nov 26 11:44:18 compute-0 systemd[1]: Started Session 40 of User zuul.
Nov 26 11:44:18 compute-0 sshd-session[127786]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:44:19 compute-0 sudo[127939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxubjcefjpesvpgiudfhbjnhekwhhxuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157459.0143378-16-208811163568978/AnsiballZ_tempfile.py'
Nov 26 11:44:19 compute-0 sudo[127939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:19 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:19 compute-0 python3.9[127941]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 26 11:44:19 compute-0 sudo[127939]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:19 compute-0 ceph-mon[74928]: 10.f scrub starts
Nov 26 11:44:19 compute-0 ceph-mon[74928]: 10.f scrub ok
Nov 26 11:44:19 compute-0 ceph-mon[74928]: 10.1e scrub starts
Nov 26 11:44:19 compute-0 ceph-mon[74928]: 10.1e scrub ok
Nov 26 11:44:19 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.c scrub starts
Nov 26 11:44:19 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.c scrub ok
Nov 26 11:44:19 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Nov 26 11:44:19 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Nov 26 11:44:19 compute-0 sudo[128091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vurwnohhlwdnraujpcmfkbyczbfpjgpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157459.6010644-28-265469663490283/AnsiballZ_stat.py'
Nov 26 11:44:19 compute-0 sudo[128091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:20 compute-0 python3.9[128093]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:44:20 compute-0 sudo[128091]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:20 compute-0 sudo[128245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsnasojmscjgazsibzosqdwuutoggzgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157460.1606283-36-45188840028125/AnsiballZ_slurp.py'
Nov 26 11:44:20 compute-0 sudo[128245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:20 compute-0 ceph-mon[74928]: 10.2 scrub starts
Nov 26 11:44:20 compute-0 ceph-mon[74928]: 10.2 scrub ok
Nov 26 11:44:20 compute-0 ceph-mon[74928]: pgmap v273: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:20 compute-0 ceph-mon[74928]: 9.c scrub starts
Nov 26 11:44:20 compute-0 ceph-mon[74928]: 9.c scrub ok
Nov 26 11:44:20 compute-0 python3.9[128247]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 26 11:44:20 compute-0 sudo[128245]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:20 compute-0 sudo[128397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtgheveskgjsdboxbhjiucdkwtfxmvod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157460.7382708-44-201311850101920/AnsiballZ_stat.py'
Nov 26 11:44:20 compute-0 sudo[128397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:21 compute-0 python3.9[128399]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.khc6li1l follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:44:21 compute-0 sudo[128397]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:21 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:21 compute-0 sudo[128522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvfbliteoistuhmsvocbvwymsxziifsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157460.7382708-44-201311850101920/AnsiballZ_copy.py'
Nov 26 11:44:21 compute-0 sudo[128522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:21 compute-0 ceph-mon[74928]: 10.14 scrub starts
Nov 26 11:44:21 compute-0 ceph-mon[74928]: 10.14 scrub ok
Nov 26 11:44:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:44:21 compute-0 python3.9[128524]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.khc6li1l mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764157460.7382708-44-201311850101920/.source.khc6li1l _original_basename=.jd8qg58d follow=False checksum=90db9d137e04cff6b975346e222d508d45d38f30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:44:21 compute-0 sudo[128522]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:21 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 26 11:44:21 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 26 11:44:22 compute-0 sudo[128674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnruzjfrnzmojiwhpqhhdsftltgheask ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157461.7363026-59-79014078746895/AnsiballZ_setup.py'
Nov 26 11:44:22 compute-0 sudo[128674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:22 compute-0 python3.9[128676]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:44:22 compute-0 sudo[128674]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:22 compute-0 ceph-mon[74928]: pgmap v274: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:22 compute-0 ceph-mon[74928]: 6.f scrub starts
Nov 26 11:44:22 compute-0 ceph-mon[74928]: 6.f scrub ok
Nov 26 11:44:22 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Nov 26 11:44:22 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Nov 26 11:44:22 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 26 11:44:22 compute-0 sudo[128826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfnwwjewsdelbqpvkwxeqtqlomwifcws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157462.5537384-68-90560834093565/AnsiballZ_blockinfile.py'
Nov 26 11:44:22 compute-0 sudo[128826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:22 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 26 11:44:23 compute-0 python3.9[128828]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCdu1hnIRd4z8G3I/mWuSheNmPiy5JmJewQcNJK8Xnh8+3RkH5Ir5TjoWjcBbus7LjOWI2vB4aZJsvabWyo7XjKyNNxzQ7T+UUtFQHHtIRypSfnRd9wdyQCvNrlkMJ7DLKuVhyc+WAxU7ggNsuqulmsir7MeF5F5u7PpYIFEz55Zw1rMt0Z3DfE7mQzK0SkfNPlKPjVcnsomTnv/2gusmTD/r89MrE1qZVfvp6hlUFt+tTSGrBDlY7nlFn/QezWHpVltfe60IjjlT4ElFFphHl9gsTZX+05KYpO/Uebsxd+fdVUMeE7mHasJ85ZtnVr1e4XfjGNZXAbwMzGT4AsuKukBD2hHY9N2iY2muRygKVb2Dy9T/6KNr7UESlajeu4d+dzV38+cpl+yX0UJifpxrziOs9FoRtXXtvHMgBqhEeMPwM3JVmkHRYuVTgZmkT5hp+701rg/kUmrtMORp4Pz+cPNEf9bBh3MolxoX2ywMemm+X4pQ2q0SkObR2wVPDwIuM=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKJRMprkU6bQh71XlfALiaL1rgqAMYtwVhOv3RB2wXcv
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFDitpoLOaesAk4S4uxyeXlnPZ6G1ds/IGaDtcgfENrpDvSwe8nWJ+j940dFwDP4H7TYghuxWGo6MCtAEhXya7c=
                                              create=True mode=0644 path=/tmp/ansible.khc6li1l state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:44:23 compute-0 sudo[128826]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:23 compute-0 sudo[128978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrbodjkvpgmnhjchuooagxdxbchrclgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157463.1321688-76-235882484522890/AnsiballZ_command.py'
Nov 26 11:44:23 compute-0 sudo[128978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:23 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:23 compute-0 ceph-mon[74928]: 9.13 scrub starts
Nov 26 11:44:23 compute-0 ceph-mon[74928]: 9.13 scrub ok
Nov 26 11:44:23 compute-0 ceph-mon[74928]: 6.e scrub starts
Nov 26 11:44:23 compute-0 ceph-mon[74928]: 6.e scrub ok
Nov 26 11:44:23 compute-0 python3.9[128980]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.khc6li1l' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:44:23 compute-0 sudo[128978]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:23 compute-0 sudo[129132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stqtiemzdvvwffetjlfafxmjiywsychw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157463.693866-84-62218802843127/AnsiballZ_file.py'
Nov 26 11:44:23 compute-0 sudo[129132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:24 compute-0 python3.9[129134]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.khc6li1l state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:44:24 compute-0 sudo[129132]: pam_unix(sudo:session): session closed for user root
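[editor's note] The known-hosts sequence above: the play stats and slurps the existing /etc/ssh/ssh_known_hosts, gathers this host's rsa/ed25519/ecdsa public keys via the setup module, writes them into the temporary file /tmp/ansible.khc6li1l with blockinfile, replaces the system file with a shell redirect, and finally removes the temp file. The shell-visible part, with the last step expressed as its command-line equivalent, is:
    cat '/tmp/ansible.khc6li1l' > /etc/ssh/ssh_known_hosts
    rm -f /tmp/ansible.khc6li1l   # done here via ansible.builtin.file state=absent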
Nov 26 11:44:24 compute-0 sshd-session[127789]: Connection closed by 192.168.122.30 port 40992
Nov 26 11:44:24 compute-0 sshd-session[127786]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:44:24 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Nov 26 11:44:24 compute-0 systemd[1]: session-40.scope: Consumed 3.468s CPU time.
Nov 26 11:44:24 compute-0 systemd-logind[744]: Session 40 logged out. Waiting for processes to exit.
Nov 26 11:44:24 compute-0 systemd-logind[744]: Removed session 40.
Nov 26 11:44:24 compute-0 ceph-mon[74928]: pgmap v275: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:25 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:25 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.19 deep-scrub starts
Nov 26 11:44:25 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.19 deep-scrub ok
Nov 26 11:44:25 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 26 11:44:25 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 26 11:44:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:44:26 compute-0 ceph-mon[74928]: pgmap v276: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:26 compute-0 ceph-mon[74928]: 9.19 deep-scrub starts
Nov 26 11:44:26 compute-0 ceph-mon[74928]: 9.19 deep-scrub ok
Nov 26 11:44:26 compute-0 ceph-mon[74928]: 6.6 scrub starts
Nov 26 11:44:26 compute-0 ceph-mon[74928]: 6.6 scrub ok
Nov 26 11:44:27 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:27 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.d scrub starts
Nov 26 11:44:28 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.d scrub ok
Nov 26 11:44:28 compute-0 ceph-mon[74928]: pgmap v277: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:28 compute-0 ceph-mon[74928]: 9.d scrub starts
Nov 26 11:44:28 compute-0 ceph-mon[74928]: 9.d scrub ok
Nov 26 11:44:29 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:29 compute-0 sshd-session[129160]: Accepted publickey for zuul from 192.168.122.30 port 52854 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:44:29 compute-0 systemd-logind[744]: New session 41 of user zuul.
Nov 26 11:44:29 compute-0 systemd[1]: Started Session 41 of User zuul.
Nov 26 11:44:29 compute-0 sshd-session[129160]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:44:30 compute-0 python3.9[129313]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:44:30 compute-0 ceph-mon[74928]: pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:31 compute-0 sudo[129467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iycitaaawzzhyqvjsqhptaublvfwemnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157470.6099572-32-257303235211726/AnsiballZ_systemd.py'
Nov 26 11:44:31 compute-0 sudo[129467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:31 compute-0 python3.9[129469]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 26 11:44:31 compute-0 sudo[129467]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:31 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:44:31 compute-0 sudo[129621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbfhbazelbjhfcvydobkulmelrasrpfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157471.4028037-40-114796792207612/AnsiballZ_systemd.py'
Nov 26 11:44:31 compute-0 sudo[129621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:31 compute-0 sudo[129624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:44:31 compute-0 sudo[129624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:44:31 compute-0 sudo[129624]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:31 compute-0 sudo[129649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:44:31 compute-0 sudo[129649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:44:31 compute-0 sudo[129649]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:31 compute-0 sudo[129674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:44:31 compute-0 sudo[129674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:44:31 compute-0 sudo[129674]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:31 compute-0 sudo[129699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 11:44:31 compute-0 sudo[129699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:44:31 compute-0 python3.9[129623]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 11:44:31 compute-0 sudo[129621]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:31 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 26 11:44:32 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Nov 26 11:44:32 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Nov 26 11:44:32 compute-0 sudo[129699]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:44:32 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:44:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:44:32 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:44:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:44:32 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:44:32 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 51dcbf60-b907-40d0-bc55-44bb2333c2c2 does not exist
Nov 26 11:44:32 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 382832a2-46f0-4df1-b73b-815e8feacd14 does not exist
Nov 26 11:44:32 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev bbdb5ed4-2275-4fbf-b34b-2421ec8cf789 does not exist
Nov 26 11:44:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 11:44:32 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:44:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 11:44:32 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:44:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:44:32 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:44:32 compute-0 sudo[129840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:44:32 compute-0 sudo[129840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:44:32 compute-0 sudo[129840]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:32 compute-0 sudo[129885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:44:32 compute-0 sudo[129885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:44:32 compute-0 sudo[129885]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:32 compute-0 sudo[129973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azcokfqqpqknfxciimqtshzoxpnyodym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157471.9863603-49-193608545294684/AnsiballZ_command.py'
Nov 26 11:44:32 compute-0 sudo[129973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:32 compute-0 sudo[129938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:44:32 compute-0 sudo[129938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:44:32 compute-0 sudo[129938]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:32 compute-0 sudo[129983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 11:44:32 compute-0 sudo[129983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:44:32 compute-0 python3.9[129980]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:44:32 compute-0 sudo[129973]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:32 compute-0 podman[130064]: 2025-11-26 11:44:32.529788542 +0000 UTC m=+0.025123569 container create 2cff4c780b22da5cfebc762a83240729996db320d9ad0b349b395f5b6d1147ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wozniak, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:44:32 compute-0 systemd[1]: Started libpod-conmon-2cff4c780b22da5cfebc762a83240729996db320d9ad0b349b395f5b6d1147ae.scope.
Nov 26 11:44:32 compute-0 ceph-mon[74928]: pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:32 compute-0 ceph-mon[74928]: 9.9 scrub starts
Nov 26 11:44:32 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:44:32 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:44:32 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:44:32 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:44:32 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:44:32 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:44:32 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:44:32 compute-0 podman[130064]: 2025-11-26 11:44:32.573842629 +0000 UTC m=+0.069177667 container init 2cff4c780b22da5cfebc762a83240729996db320d9ad0b349b395f5b6d1147ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:44:32 compute-0 podman[130064]: 2025-11-26 11:44:32.578851698 +0000 UTC m=+0.074186726 container start 2cff4c780b22da5cfebc762a83240729996db320d9ad0b349b395f5b6d1147ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wozniak, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:44:32 compute-0 nice_wozniak[130095]: 167 167
Nov 26 11:44:32 compute-0 systemd[1]: libpod-2cff4c780b22da5cfebc762a83240729996db320d9ad0b349b395f5b6d1147ae.scope: Deactivated successfully.
Nov 26 11:44:32 compute-0 podman[130064]: 2025-11-26 11:44:32.581120436 +0000 UTC m=+0.076455485 container attach 2cff4c780b22da5cfebc762a83240729996db320d9ad0b349b395f5b6d1147ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wozniak, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:44:32 compute-0 podman[130064]: 2025-11-26 11:44:32.582938024 +0000 UTC m=+0.078273063 container died 2cff4c780b22da5cfebc762a83240729996db320d9ad0b349b395f5b6d1147ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wozniak, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 11:44:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c407c3945fb48e4d3a041ccb8db0e0ed75e8f2a1cb71c380412c990a2f2310c-merged.mount: Deactivated successfully.
Nov 26 11:44:32 compute-0 podman[130064]: 2025-11-26 11:44:32.602336636 +0000 UTC m=+0.097671664 container remove 2cff4c780b22da5cfebc762a83240729996db320d9ad0b349b395f5b6d1147ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wozniak, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:44:32 compute-0 podman[130064]: 2025-11-26 11:44:32.519239504 +0000 UTC m=+0.014574542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:44:32 compute-0 systemd[1]: libpod-conmon-2cff4c780b22da5cfebc762a83240729996db320d9ad0b349b395f5b6d1147ae.scope: Deactivated successfully.
Nov 26 11:44:32 compute-0 podman[130151]: 2025-11-26 11:44:32.707502144 +0000 UTC m=+0.025992539 container create 84613a60a483e88d0745ed119b24be735f143f45b143e763d27a9d350d3eed4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:44:32 compute-0 systemd[1]: Started libpod-conmon-84613a60a483e88d0745ed119b24be735f143f45b143e763d27a9d350d3eed4f.scope.
Nov 26 11:44:32 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5510116b83fcc361e47ca336dd2feb8ae8b756d3f7c5fbb7f4cd70646feab31e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5510116b83fcc361e47ca336dd2feb8ae8b756d3f7c5fbb7f4cd70646feab31e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5510116b83fcc361e47ca336dd2feb8ae8b756d3f7c5fbb7f4cd70646feab31e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5510116b83fcc361e47ca336dd2feb8ae8b756d3f7c5fbb7f4cd70646feab31e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5510116b83fcc361e47ca336dd2feb8ae8b756d3f7c5fbb7f4cd70646feab31e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:44:32 compute-0 podman[130151]: 2025-11-26 11:44:32.762748764 +0000 UTC m=+0.081239169 container init 84613a60a483e88d0745ed119b24be735f143f45b143e763d27a9d350d3eed4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 11:44:32 compute-0 podman[130151]: 2025-11-26 11:44:32.767945235 +0000 UTC m=+0.086435620 container start 84613a60a483e88d0745ed119b24be735f143f45b143e763d27a9d350d3eed4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_williams, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:44:32 compute-0 podman[130151]: 2025-11-26 11:44:32.768968255 +0000 UTC m=+0.087458660 container attach 84613a60a483e88d0745ed119b24be735f143f45b143e763d27a9d350d3eed4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_williams, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:44:32 compute-0 podman[130151]: 2025-11-26 11:44:32.69655748 +0000 UTC m=+0.015047884 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:44:32 compute-0 sudo[130242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcfordrkbxckbsegipgiqebusxprmqon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157472.568379-57-216564376114360/AnsiballZ_stat.py'
Nov 26 11:44:32 compute-0 sudo[130242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:33 compute-0 python3.9[130244]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:44:33 compute-0 sudo[130242]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:33 compute-0 sudo[130410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmerccqwgfldfmxvgqtxadzkyrmdpzfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157473.166947-66-80733417419107/AnsiballZ_file.py'
Nov 26 11:44:33 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:33 compute-0 sudo[130410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:33 compute-0 priceless_williams[130164]: --> passed data devices: 0 physical, 3 LVM
Nov 26 11:44:33 compute-0 priceless_williams[130164]: --> relative data size: 1.0
Nov 26 11:44:33 compute-0 priceless_williams[130164]: --> All data devices are unavailable
Nov 26 11:44:33 compute-0 systemd[1]: libpod-84613a60a483e88d0745ed119b24be735f143f45b143e763d27a9d350d3eed4f.scope: Deactivated successfully.
Nov 26 11:44:33 compute-0 ceph-mon[74928]: 9.9 scrub ok
Nov 26 11:44:33 compute-0 podman[130421]: 2025-11-26 11:44:33.581870864 +0000 UTC m=+0.016343496 container died 84613a60a483e88d0745ed119b24be735f143f45b143e763d27a9d350d3eed4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_williams, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:44:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-5510116b83fcc361e47ca336dd2feb8ae8b756d3f7c5fbb7f4cd70646feab31e-merged.mount: Deactivated successfully.
Nov 26 11:44:33 compute-0 podman[130421]: 2025-11-26 11:44:33.611001923 +0000 UTC m=+0.045474555 container remove 84613a60a483e88d0745ed119b24be735f143f45b143e763d27a9d350d3eed4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:44:33 compute-0 systemd[1]: libpod-conmon-84613a60a483e88d0745ed119b24be735f143f45b143e763d27a9d350d3eed4f.scope: Deactivated successfully.
Nov 26 11:44:33 compute-0 python3.9[130413]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:44:33 compute-0 sudo[130410]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:33 compute-0 sudo[129983]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:33 compute-0 sudo[130434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:44:33 compute-0 sudo[130434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:44:33 compute-0 sudo[130434]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:33 compute-0 sudo[130482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:44:33 compute-0 sudo[130482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:44:33 compute-0 sudo[130482]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:33 compute-0 sudo[130507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:44:33 compute-0 sudo[130507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:44:33 compute-0 sudo[130507]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:33 compute-0 sudo[130532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- lvm list --format json
Nov 26 11:44:33 compute-0 sudo[130532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:44:33 compute-0 sshd-session[129163]: Connection closed by 192.168.122.30 port 52854
Nov 26 11:44:33 compute-0 sshd-session[129160]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:44:33 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Nov 26 11:44:33 compute-0 systemd[1]: session-41.scope: Consumed 2.657s CPU time.
Nov 26 11:44:33 compute-0 systemd-logind[744]: Session 41 logged out. Waiting for processes to exit.
Nov 26 11:44:33 compute-0 systemd-logind[744]: Removed session 41.
Nov 26 11:44:34 compute-0 podman[130588]: 2025-11-26 11:44:34.031897922 +0000 UTC m=+0.028223347 container create beb9dc199d1a39b94ee64ecf2a7310dd5b6be7604379d5244bce2a4d5907df04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 11:44:34 compute-0 systemd[1]: Started libpod-conmon-beb9dc199d1a39b94ee64ecf2a7310dd5b6be7604379d5244bce2a4d5907df04.scope.
Nov 26 11:44:34 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:44:34 compute-0 podman[130588]: 2025-11-26 11:44:34.081134447 +0000 UTC m=+0.077459871 container init beb9dc199d1a39b94ee64ecf2a7310dd5b6be7604379d5244bce2a4d5907df04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 11:44:34 compute-0 podman[130588]: 2025-11-26 11:44:34.087029152 +0000 UTC m=+0.083354576 container start beb9dc199d1a39b94ee64ecf2a7310dd5b6be7604379d5244bce2a4d5907df04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_antonelli, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 11:44:34 compute-0 podman[130588]: 2025-11-26 11:44:34.088484058 +0000 UTC m=+0.084809482 container attach beb9dc199d1a39b94ee64ecf2a7310dd5b6be7604379d5244bce2a4d5907df04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_antonelli, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:44:34 compute-0 musing_antonelli[130601]: 167 167
Nov 26 11:44:34 compute-0 systemd[1]: libpod-beb9dc199d1a39b94ee64ecf2a7310dd5b6be7604379d5244bce2a4d5907df04.scope: Deactivated successfully.
Nov 26 11:44:34 compute-0 conmon[130601]: conmon beb9dc199d1a39b94ee6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-beb9dc199d1a39b94ee64ecf2a7310dd5b6be7604379d5244bce2a4d5907df04.scope/container/memory.events
Nov 26 11:44:34 compute-0 podman[130588]: 2025-11-26 11:44:34.093253019 +0000 UTC m=+0.089578443 container died beb9dc199d1a39b94ee64ecf2a7310dd5b6be7604379d5244bce2a4d5907df04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_antonelli, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 11:44:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-dddbe21242747fac15860613327eff311532afd4684b51cc39f969c9060f18d9-merged.mount: Deactivated successfully.
Nov 26 11:44:34 compute-0 podman[130588]: 2025-11-26 11:44:34.113066282 +0000 UTC m=+0.109391706 container remove beb9dc199d1a39b94ee64ecf2a7310dd5b6be7604379d5244bce2a4d5907df04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_antonelli, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 11:44:34 compute-0 podman[130588]: 2025-11-26 11:44:34.020153227 +0000 UTC m=+0.016478652 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:44:34 compute-0 systemd[1]: libpod-conmon-beb9dc199d1a39b94ee64ecf2a7310dd5b6be7604379d5244bce2a4d5907df04.scope: Deactivated successfully.
Nov 26 11:44:34 compute-0 podman[130623]: 2025-11-26 11:44:34.222276055 +0000 UTC m=+0.027408941 container create 1485c7e9a6508f5c739f5eda2ecacc8587da28bb9df8869cb2b1c04c260c0838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:44:34 compute-0 systemd[1]: Started libpod-conmon-1485c7e9a6508f5c739f5eda2ecacc8587da28bb9df8869cb2b1c04c260c0838.scope.
Nov 26 11:44:34 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebbce79fbd21690464a0e53a8abc7e4431c7235c54eab866a91dce4aafe2963f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebbce79fbd21690464a0e53a8abc7e4431c7235c54eab866a91dce4aafe2963f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebbce79fbd21690464a0e53a8abc7e4431c7235c54eab866a91dce4aafe2963f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebbce79fbd21690464a0e53a8abc7e4431c7235c54eab866a91dce4aafe2963f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:44:34 compute-0 podman[130623]: 2025-11-26 11:44:34.288128258 +0000 UTC m=+0.093261164 container init 1485c7e9a6508f5c739f5eda2ecacc8587da28bb9df8869cb2b1c04c260c0838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 11:44:34 compute-0 podman[130623]: 2025-11-26 11:44:34.292579209 +0000 UTC m=+0.097712094 container start 1485c7e9a6508f5c739f5eda2ecacc8587da28bb9df8869cb2b1c04c260c0838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_joliot, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:44:34 compute-0 podman[130623]: 2025-11-26 11:44:34.294105308 +0000 UTC m=+0.099238214 container attach 1485c7e9a6508f5c739f5eda2ecacc8587da28bb9df8869cb2b1c04c260c0838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 11:44:34 compute-0 podman[130623]: 2025-11-26 11:44:34.21228046 +0000 UTC m=+0.017413366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:44:34 compute-0 ceph-mon[74928]: pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]: {
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:     "0": [
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:         {
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "devices": [
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "/dev/loop3"
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             ],
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "lv_name": "ceph_lv0",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "lv_size": "21470642176",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a9ad59a0-aa2e-4d92-b571-519d2d145b6a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "lv_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "name": "ceph_lv0",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "tags": {
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.block_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.cluster_name": "ceph",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.crush_device_class": "",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.encrypted": "0",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.osd_fsid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.osd_id": "0",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.type": "block",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.vdo": "0"
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             },
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "type": "block",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "vg_name": "ceph_vg0"
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:         }
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:     ],
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:     "1": [
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:         {
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "devices": [
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "/dev/loop4"
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             ],
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "lv_name": "ceph_lv1",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "lv_size": "21470642176",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2627095b-eef8-4027-bfef-68bf7cb6801f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "lv_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "name": "ceph_lv1",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "tags": {
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.block_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.cluster_name": "ceph",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.crush_device_class": "",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.encrypted": "0",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.osd_fsid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.osd_id": "1",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.type": "block",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.vdo": "0"
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             },
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "type": "block",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "vg_name": "ceph_vg1"
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:         }
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:     ],
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:     "2": [
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:         {
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "devices": [
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "/dev/loop5"
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             ],
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "lv_name": "ceph_lv2",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "lv_size": "21470642176",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d56156fb-7361-4bef-b06b-1320109b4323,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "lv_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "name": "ceph_lv2",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "tags": {
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.block_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.cluster_name": "ceph",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.crush_device_class": "",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.encrypted": "0",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.osd_fsid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.osd_id": "2",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.type": "block",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:                 "ceph.vdo": "0"
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             },
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "type": "block",
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:             "vg_name": "ceph_vg2"
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:         }
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]:     ]
Nov 26 11:44:34 compute-0 vigilant_joliot[130637]: }
Nov 26 11:44:34 compute-0 systemd[1]: libpod-1485c7e9a6508f5c739f5eda2ecacc8587da28bb9df8869cb2b1c04c260c0838.scope: Deactivated successfully.
Nov 26 11:44:34 compute-0 podman[130623]: 2025-11-26 11:44:34.925010219 +0000 UTC m=+0.730143106 container died 1485c7e9a6508f5c739f5eda2ecacc8587da28bb9df8869cb2b1c04c260c0838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:44:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebbce79fbd21690464a0e53a8abc7e4431c7235c54eab866a91dce4aafe2963f-merged.mount: Deactivated successfully.
Nov 26 11:44:34 compute-0 podman[130623]: 2025-11-26 11:44:34.95355914 +0000 UTC m=+0.758692026 container remove 1485c7e9a6508f5c739f5eda2ecacc8587da28bb9df8869cb2b1c04c260c0838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Nov 26 11:44:34 compute-0 systemd[1]: libpod-conmon-1485c7e9a6508f5c739f5eda2ecacc8587da28bb9df8869cb2b1c04c260c0838.scope: Deactivated successfully.
Nov 26 11:44:34 compute-0 sudo[130532]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:35 compute-0 sudo[130657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:44:35 compute-0 sudo[130657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:44:35 compute-0 sudo[130657]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:35 compute-0 sudo[130682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:44:35 compute-0 sudo[130682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:44:35 compute-0 sudo[130682]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:35 compute-0 sudo[130707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:44:35 compute-0 sudo[130707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:44:35 compute-0 sudo[130707]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:35 compute-0 sudo[130732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- raw list --format json
Nov 26 11:44:35 compute-0 sudo[130732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:44:35 compute-0 podman[130787]: 2025-11-26 11:44:35.343619924 +0000 UTC m=+0.023912722 container create 9478a646223bedcf2402c5c6ea968443298beb1c989ae6b9c409305b6125ee3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:44:35 compute-0 systemd[1]: Started libpod-conmon-9478a646223bedcf2402c5c6ea968443298beb1c989ae6b9c409305b6125ee3d.scope.
Nov 26 11:44:35 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:44:35 compute-0 podman[130787]: 2025-11-26 11:44:35.400168257 +0000 UTC m=+0.080461066 container init 9478a646223bedcf2402c5c6ea968443298beb1c989ae6b9c409305b6125ee3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 11:44:35 compute-0 podman[130787]: 2025-11-26 11:44:35.405328106 +0000 UTC m=+0.085620913 container start 9478a646223bedcf2402c5c6ea968443298beb1c989ae6b9c409305b6125ee3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:44:35 compute-0 podman[130787]: 2025-11-26 11:44:35.406515817 +0000 UTC m=+0.086808625 container attach 9478a646223bedcf2402c5c6ea968443298beb1c989ae6b9c409305b6125ee3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:44:35 compute-0 gallant_jackson[130801]: 167 167
Nov 26 11:44:35 compute-0 systemd[1]: libpod-9478a646223bedcf2402c5c6ea968443298beb1c989ae6b9c409305b6125ee3d.scope: Deactivated successfully.
Nov 26 11:44:35 compute-0 podman[130787]: 2025-11-26 11:44:35.408742279 +0000 UTC m=+0.089035088 container died 9478a646223bedcf2402c5c6ea968443298beb1c989ae6b9c409305b6125ee3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jackson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 11:44:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-8306ce8217fb7d761df376f3dcb66d45d0bcc382fd86d207d6290271cda7c85a-merged.mount: Deactivated successfully.
Nov 26 11:44:35 compute-0 podman[130787]: 2025-11-26 11:44:35.334219692 +0000 UTC m=+0.014512521 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:44:35 compute-0 podman[130787]: 2025-11-26 11:44:35.435672344 +0000 UTC m=+0.115965142 container remove 9478a646223bedcf2402c5c6ea968443298beb1c989ae6b9c409305b6125ee3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 11:44:35 compute-0 systemd[1]: libpod-conmon-9478a646223bedcf2402c5c6ea968443298beb1c989ae6b9c409305b6125ee3d.scope: Deactivated successfully.
Nov 26 11:44:35 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:35 compute-0 podman[130824]: 2025-11-26 11:44:35.547334455 +0000 UTC m=+0.028853237 container create d8aaced1e400e79d9a20d1d514a85ddf34224db2b4a3b3df8f852626613b0e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dijkstra, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:44:35 compute-0 systemd[1]: Started libpod-conmon-d8aaced1e400e79d9a20d1d514a85ddf34224db2b4a3b3df8f852626613b0e47.scope.
Nov 26 11:44:35 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5f13e3d007e2b931d48acda39a41e875eeb68af830a8de158e1b356cd868748/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5f13e3d007e2b931d48acda39a41e875eeb68af830a8de158e1b356cd868748/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5f13e3d007e2b931d48acda39a41e875eeb68af830a8de158e1b356cd868748/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5f13e3d007e2b931d48acda39a41e875eeb68af830a8de158e1b356cd868748/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:44:35 compute-0 podman[130824]: 2025-11-26 11:44:35.601292751 +0000 UTC m=+0.082811512 container init d8aaced1e400e79d9a20d1d514a85ddf34224db2b4a3b3df8f852626613b0e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dijkstra, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:44:35 compute-0 podman[130824]: 2025-11-26 11:44:35.606631256 +0000 UTC m=+0.088150008 container start d8aaced1e400e79d9a20d1d514a85ddf34224db2b4a3b3df8f852626613b0e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 11:44:35 compute-0 podman[130824]: 2025-11-26 11:44:35.608663062 +0000 UTC m=+0.090181833 container attach d8aaced1e400e79d9a20d1d514a85ddf34224db2b4a3b3df8f852626613b0e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 11:44:35 compute-0 podman[130824]: 2025-11-26 11:44:35.535191638 +0000 UTC m=+0.016710389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:44:35 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Nov 26 11:44:35 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Nov 26 11:44:36 compute-0 relaxed_dijkstra[130837]: {
Nov 26 11:44:36 compute-0 relaxed_dijkstra[130837]:     "2627095b-eef8-4027-bfef-68bf7cb6801f": {
Nov 26 11:44:36 compute-0 relaxed_dijkstra[130837]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:44:36 compute-0 relaxed_dijkstra[130837]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 11:44:36 compute-0 relaxed_dijkstra[130837]:         "osd_id": 1,
Nov 26 11:44:36 compute-0 relaxed_dijkstra[130837]:         "osd_uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:44:36 compute-0 relaxed_dijkstra[130837]:         "type": "bluestore"
Nov 26 11:44:36 compute-0 relaxed_dijkstra[130837]:     },
Nov 26 11:44:36 compute-0 relaxed_dijkstra[130837]:     "a9ad59a0-aa2e-4d92-b571-519d2d145b6a": {
Nov 26 11:44:36 compute-0 relaxed_dijkstra[130837]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:44:36 compute-0 relaxed_dijkstra[130837]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 11:44:36 compute-0 relaxed_dijkstra[130837]:         "osd_id": 0,
Nov 26 11:44:36 compute-0 relaxed_dijkstra[130837]:         "osd_uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:44:36 compute-0 relaxed_dijkstra[130837]:         "type": "bluestore"
Nov 26 11:44:36 compute-0 relaxed_dijkstra[130837]:     },
Nov 26 11:44:36 compute-0 relaxed_dijkstra[130837]:     "d56156fb-7361-4bef-b06b-1320109b4323": {
Nov 26 11:44:36 compute-0 relaxed_dijkstra[130837]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:44:36 compute-0 relaxed_dijkstra[130837]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 11:44:36 compute-0 relaxed_dijkstra[130837]:         "osd_id": 2,
Nov 26 11:44:36 compute-0 relaxed_dijkstra[130837]:         "osd_uuid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:44:36 compute-0 relaxed_dijkstra[130837]:         "type": "bluestore"
Nov 26 11:44:36 compute-0 relaxed_dijkstra[130837]:     }
Nov 26 11:44:36 compute-0 relaxed_dijkstra[130837]: }
Nov 26 11:44:36 compute-0 systemd[1]: libpod-d8aaced1e400e79d9a20d1d514a85ddf34224db2b4a3b3df8f852626613b0e47.scope: Deactivated successfully.
Nov 26 11:44:36 compute-0 podman[130870]: 2025-11-26 11:44:36.387539148 +0000 UTC m=+0.017006438 container died d8aaced1e400e79d9a20d1d514a85ddf34224db2b4a3b3df8f852626613b0e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:44:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5f13e3d007e2b931d48acda39a41e875eeb68af830a8de158e1b356cd868748-merged.mount: Deactivated successfully.
Nov 26 11:44:36 compute-0 podman[130870]: 2025-11-26 11:44:36.41625305 +0000 UTC m=+0.045720320 container remove d8aaced1e400e79d9a20d1d514a85ddf34224db2b4a3b3df8f852626613b0e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:44:36 compute-0 systemd[1]: libpod-conmon-d8aaced1e400e79d9a20d1d514a85ddf34224db2b4a3b3df8f852626613b0e47.scope: Deactivated successfully.
Nov 26 11:44:36 compute-0 sudo[130732]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:44:36 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:44:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:44:36 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:44:36 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 141ec542-0030-4f3b-abc7-669fae37db33 does not exist
Nov 26 11:44:36 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev b900e963-ed47-4378-b330-165dad1bf5b2 does not exist
Nov 26 11:44:36 compute-0 sudo[130882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:44:36 compute-0 sudo[130882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:44:36 compute-0 sudo[130882]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:36 compute-0 sudo[130907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:44:36 compute-0 sudo[130907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:44:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:44:36 compute-0 sudo[130907]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:36 compute-0 ceph-mon[74928]: pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:36 compute-0 ceph-mon[74928]: 6.2 scrub starts
Nov 26 11:44:36 compute-0 ceph-mon[74928]: 6.2 scrub ok
Nov 26 11:44:36 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:44:36 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:44:36 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Nov 26 11:44:36 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Nov 26 11:44:37 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:37 compute-0 ceph-mon[74928]: 9.1 scrub starts
Nov 26 11:44:37 compute-0 ceph-mon[74928]: 9.1 scrub ok
Nov 26 11:44:38 compute-0 ceph-mon[74928]: pgmap v282: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:38 compute-0 sshd-session[130932]: Accepted publickey for zuul from 192.168.122.30 port 36128 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:44:38 compute-0 systemd-logind[744]: New session 42 of user zuul.
Nov 26 11:44:38 compute-0 systemd[1]: Started Session 42 of User zuul.
Nov 26 11:44:38 compute-0 sshd-session[130932]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:44:39 compute-0 python3.9[131085]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:44:39 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:39 compute-0 sudo[131239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swzfycogdhzqtqusignqxcywbppketqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157479.7465823-34-146280557687176/AnsiballZ_setup.py'
Nov 26 11:44:39 compute-0 sudo[131239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:40 compute-0 python3.9[131241]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 11:44:40 compute-0 sudo[131239]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:40 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 26 11:44:40 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 26 11:44:40 compute-0 ceph-mon[74928]: pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:40 compute-0 ceph-mon[74928]: 6.4 scrub starts
Nov 26 11:44:40 compute-0 sudo[131323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shfggkiadvoaagovltzhvbujcnutxfhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157479.7465823-34-146280557687176/AnsiballZ_dnf.py'
Nov 26 11:44:40 compute-0 sudo[131323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:40 compute-0 python3.9[131325]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 26 11:44:40 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Nov 26 11:44:40 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Nov 26 11:44:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Optimize plan auto_2025-11-26_11:44:41
Nov 26 11:44:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 11:44:41 compute-0 ceph-mgr[75197]: [balancer INFO root] do_upmap
Nov 26 11:44:41 compute-0 ceph-mgr[75197]: [balancer INFO root] pools ['images', 'backups', '.rgw.root', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'volumes']
Nov 26 11:44:41 compute-0 ceph-mgr[75197]: [balancer INFO root] prepared 0/10 changes
Nov 26 11:44:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:44:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:44:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:44:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:44:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:44:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:44:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 11:44:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:44:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 11:44:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:44:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:44:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:44:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:44:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:44:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:44:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:44:41 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:41 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.c deep-scrub starts
Nov 26 11:44:41 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.c deep-scrub ok
Nov 26 11:44:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:44:41 compute-0 ceph-mon[74928]: 6.4 scrub ok
Nov 26 11:44:41 compute-0 ceph-mon[74928]: 9.3 scrub starts
Nov 26 11:44:41 compute-0 ceph-mon[74928]: 9.3 scrub ok
Nov 26 11:44:41 compute-0 ceph-mon[74928]: 6.c deep-scrub starts
Nov 26 11:44:41 compute-0 sudo[131323]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:42 compute-0 python3.9[131476]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:44:42 compute-0 ceph-mon[74928]: pgmap v284: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:42 compute-0 ceph-mon[74928]: 6.c deep-scrub ok
Nov 26 11:44:43 compute-0 python3.9[131627]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 11:44:43 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:43 compute-0 python3.9[131777]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:44:44 compute-0 python3.9[131927]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:44:44 compute-0 ceph-mon[74928]: pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:44 compute-0 sshd-session[130935]: Connection closed by 192.168.122.30 port 36128
Nov 26 11:44:44 compute-0 sshd-session[130932]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:44:44 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Nov 26 11:44:44 compute-0 systemd[1]: session-42.scope: Consumed 4.232s CPU time.
Nov 26 11:44:44 compute-0 systemd-logind[744]: Session 42 logged out. Waiting for processes to exit.
Nov 26 11:44:44 compute-0 systemd-logind[744]: Removed session 42.
Nov 26 11:44:44 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Nov 26 11:44:44 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Nov 26 11:44:45 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:45 compute-0 ceph-mon[74928]: 9.1b scrub starts
Nov 26 11:44:45 compute-0 ceph-mon[74928]: 9.1b scrub ok
Nov 26 11:44:45 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Nov 26 11:44:45 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Nov 26 11:44:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:44:46 compute-0 ceph-mon[74928]: pgmap v286: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:46 compute-0 ceph-mon[74928]: 9.1d scrub starts
Nov 26 11:44:46 compute-0 ceph-mon[74928]: 9.1d scrub ok
Nov 26 11:44:47 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 26 11:44:47 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 26 11:44:47 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:47 compute-0 ceph-mon[74928]: 6.b scrub starts
Nov 26 11:44:47 compute-0 ceph-mon[74928]: 6.b scrub ok
Nov 26 11:44:47 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.5 deep-scrub starts
Nov 26 11:44:47 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.5 deep-scrub ok
Nov 26 11:44:48 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 26 11:44:48 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 26 11:44:48 compute-0 ceph-mon[74928]: pgmap v287: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:48 compute-0 ceph-mon[74928]: 9.5 deep-scrub starts
Nov 26 11:44:48 compute-0 ceph-mon[74928]: 9.5 deep-scrub ok
Nov 26 11:44:48 compute-0 ceph-mon[74928]: 6.d scrub starts
Nov 26 11:44:48 compute-0 ceph-mon[74928]: 6.d scrub ok
Nov 26 11:44:48 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.b scrub starts
Nov 26 11:44:48 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.b scrub ok
Nov 26 11:44:49 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:49 compute-0 ceph-mon[74928]: 9.b scrub starts
Nov 26 11:44:49 compute-0 ceph-mon[74928]: 9.b scrub ok
Nov 26 11:44:49 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Nov 26 11:44:49 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Nov 26 11:44:50 compute-0 sshd-session[131952]: Accepted publickey for zuul from 192.168.122.30 port 55746 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:44:50 compute-0 systemd-logind[744]: New session 43 of user zuul.
Nov 26 11:44:50 compute-0 systemd[1]: Started Session 43 of User zuul.
Nov 26 11:44:50 compute-0 sshd-session[131952]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:44:50 compute-0 ceph-mon[74928]: pgmap v288: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:50 compute-0 ceph-mon[74928]: 9.11 scrub starts
Nov 26 11:44:50 compute-0 ceph-mon[74928]: 9.11 scrub ok
Nov 26 11:44:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 11:44:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:44:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 11:44:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:44:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:44:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:44:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:44:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:44:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:44:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:44:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:44:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:44:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 11:44:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:44:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:44:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:44:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 11:44:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:44:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 11:44:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:44:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:44:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:44:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 11:44:50 compute-0 python3.9[132105]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:44:51 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Nov 26 11:44:51 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Nov 26 11:44:51 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:44:51 compute-0 ceph-mon[74928]: 9.15 scrub starts
Nov 26 11:44:51 compute-0 ceph-mon[74928]: 9.15 scrub ok
Nov 26 11:44:52 compute-0 sudo[132259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvklbkcflwwnsnvmhqmtluehrccirsva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157491.7059095-50-159910579872396/AnsiballZ_file.py'
Nov 26 11:44:52 compute-0 sudo[132259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:52 compute-0 python3.9[132261]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:44:52 compute-0 sudo[132259]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:52 compute-0 sudo[132411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfcayiscixjorxgkvrcalfqeeaqkflcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157492.2672348-50-43033667371703/AnsiballZ_file.py'
Nov 26 11:44:52 compute-0 sudo[132411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:52 compute-0 python3.9[132413]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:44:52 compute-0 sudo[132411]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:52 compute-0 ceph-mon[74928]: pgmap v289: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:53 compute-0 sudo[132563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzjkhwkcsqlsdmreglobeozmdmoeumkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157492.7365544-65-111684726283593/AnsiballZ_stat.py'
Nov 26 11:44:53 compute-0 sudo[132563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:53 compute-0 python3.9[132565]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:44:53 compute-0 sudo[132563]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:53 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Nov 26 11:44:53 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Nov 26 11:44:53 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:53 compute-0 sudo[132686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klouawmgsaiicnwvzhhyzidwsxzqsxsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157492.7365544-65-111684726283593/AnsiballZ_copy.py'
Nov 26 11:44:53 compute-0 sudo[132686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:53 compute-0 ceph-mon[74928]: 9.1f scrub starts
Nov 26 11:44:53 compute-0 ceph-mon[74928]: 9.1f scrub ok
Nov 26 11:44:53 compute-0 python3.9[132688]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157492.7365544-65-111684726283593/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=f8ec3b12e10c3fcbbe441311784139e9e3ac66d9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:44:53 compute-0 sudo[132686]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:53 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 26 11:44:53 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 26 11:44:53 compute-0 sudo[132838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwsvastcyiebgcnoqejnpjkqdooavnuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157493.8157058-65-22481584281380/AnsiballZ_stat.py'
Nov 26 11:44:53 compute-0 sudo[132838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:54 compute-0 python3.9[132840]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:44:54 compute-0 sudo[132838]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:54 compute-0 sudo[132961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qljxihgcllcuzpgapwcmzrbujydmnrkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157493.8157058-65-22481584281380/AnsiballZ_copy.py'
Nov 26 11:44:54 compute-0 sudo[132961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:54 compute-0 python3.9[132963]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157493.8157058-65-22481584281380/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=14b03dac4f2a2bd525f670a4adfe3824aa3c15c4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:44:54 compute-0 sudo[132961]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:54 compute-0 ceph-mon[74928]: pgmap v290: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:54 compute-0 ceph-mon[74928]: 6.3 scrub starts
Nov 26 11:44:54 compute-0 ceph-mon[74928]: 6.3 scrub ok
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:44:54.630301) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157494630361, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7251, "num_deletes": 251, "total_data_size": 9518398, "memory_usage": 9804704, "flush_reason": "Manual Compaction"}
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157494643162, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7598290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 134, "largest_seqno": 7382, "table_properties": {"data_size": 7571636, "index_size": 17259, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 77358, "raw_average_key_size": 23, "raw_value_size": 7508257, "raw_average_value_size": 2267, "num_data_blocks": 758, "num_entries": 3311, "num_filter_entries": 3311, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764157081, "oldest_key_time": 1764157081, "file_creation_time": 1764157494, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "363c2a1d-8d28-40b7-a8ff-7233f1c9b7d5", "db_session_id": "CJT49RLFB1C6KNYXG0ER", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 12895 microseconds, and 10123 cpu microseconds.
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:44:54.643192) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7598290 bytes OK
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:44:54.643211) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:44:54.644032) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:44:54.644042) EVENT_LOG_v1 {"time_micros": 1764157494644039, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:44:54.644069) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9486662, prev total WAL file size 9486662, number of live WAL files 2.
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:44:54.646037) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7420KB) 13(52KB) 8(1944B)]
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157494646103, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7654212, "oldest_snapshot_seqno": -1}
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3125 keys, 7610207 bytes, temperature: kUnknown
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157494660120, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7610207, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7583990, "index_size": 17275, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7877, "raw_key_size": 75389, "raw_average_key_size": 24, "raw_value_size": 7522268, "raw_average_value_size": 2407, "num_data_blocks": 760, "num_entries": 3125, "num_filter_entries": 3125, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764157079, "oldest_key_time": 0, "file_creation_time": 1764157494, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "363c2a1d-8d28-40b7-a8ff-7233f1c9b7d5", "db_session_id": "CJT49RLFB1C6KNYXG0ER", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:44:54.660338) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7610207 bytes
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:44:54.660694) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 540.8 rd, 537.7 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.3, 0.0 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3414, records dropped: 289 output_compression: NoCompression
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:44:54.660707) EVENT_LOG_v1 {"time_micros": 1764157494660702, "job": 4, "event": "compaction_finished", "compaction_time_micros": 14154, "compaction_time_cpu_micros": 11134, "output_level": 6, "num_output_files": 1, "total_output_size": 7610207, "num_input_records": 3414, "num_output_records": 3125, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157494661777, "job": 4, "event": "table_file_deletion", "file_number": 19}
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157494661945, "job": 4, "event": "table_file_deletion", "file_number": 13}
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157494662113, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 26 11:44:54 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:44:54.645972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:44:54 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 26 11:44:54 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 26 11:44:54 compute-0 sudo[133114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhwynxizpfxpynafzlovneurwwjkhgpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157494.617858-65-130615084841809/AnsiballZ_stat.py'
Nov 26 11:44:54 compute-0 sudo[133114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:54 compute-0 python3.9[133116]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:44:54 compute-0 sudo[133114]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:55 compute-0 sudo[133237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztdnljfcsqgwapjrvgrhpsuqkyelvmfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157494.617858-65-130615084841809/AnsiballZ_copy.py'
Nov 26 11:44:55 compute-0 sudo[133237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:55 compute-0 python3.9[133239]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157494.617858-65-130615084841809/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=3ddd59a5ae9188c525e6d48336e7a1602a7054f7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:44:55 compute-0 sudo[133237]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:55 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:55 compute-0 ceph-mon[74928]: 6.7 scrub starts
Nov 26 11:44:55 compute-0 ceph-mon[74928]: 6.7 scrub ok
Nov 26 11:44:55 compute-0 sudo[133389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiwdscqjdgpgcdriaszifamyzwzmxyuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157495.5211775-109-208988580497024/AnsiballZ_file.py'
Nov 26 11:44:55 compute-0 sudo[133389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:55 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 26 11:44:55 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 26 11:44:55 compute-0 python3.9[133391]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:44:55 compute-0 sudo[133389]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:56 compute-0 sudo[133541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdqjkrclptrdrohauyhhcndfmjqswuwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157495.9522476-109-262748438963720/AnsiballZ_file.py'
Nov 26 11:44:56 compute-0 sudo[133541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:56 compute-0 python3.9[133543]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:44:56 compute-0 sudo[133541]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:44:56 compute-0 sudo[133693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhuozyoijbhsgujeuobtykywgplwkgxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157496.4061246-124-39884953722537/AnsiballZ_stat.py'
Nov 26 11:44:56 compute-0 sudo[133693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:56 compute-0 ceph-mon[74928]: pgmap v291: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:56 compute-0 ceph-mon[74928]: 6.5 scrub starts
Nov 26 11:44:56 compute-0 ceph-mon[74928]: 6.5 scrub ok
Nov 26 11:44:56 compute-0 python3.9[133695]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:44:56 compute-0 sudo[133693]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:56 compute-0 sudo[133816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmgqjutllzzdgmdxpkbctbrqalkuhxgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157496.4061246-124-39884953722537/AnsiballZ_copy.py'
Nov 26 11:44:56 compute-0 sudo[133816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:57 compute-0 python3.9[133818]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157496.4061246-124-39884953722537/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=c2db83c4007a033753521286ce876482b759054b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:44:57 compute-0 sudo[133816]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:57 compute-0 sudo[133968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdbopezawlwmkwsdlbksildfvfjgyuey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157497.1991715-124-220246282020372/AnsiballZ_stat.py'
Nov 26 11:44:57 compute-0 sudo[133968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:57 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:57 compute-0 python3.9[133970]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:44:57 compute-0 sudo[133968]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:57 compute-0 sudo[134091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upzaqmtxlsyerjzaqqbepnitteivupdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157497.1991715-124-220246282020372/AnsiballZ_copy.py'
Nov 26 11:44:57 compute-0 sudo[134091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:57 compute-0 python3.9[134093]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157497.1991715-124-220246282020372/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=cb741bfcdb23d16f20319c9874edcfa99972b549 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:44:57 compute-0 sudo[134091]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:58 compute-0 sudo[134243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baotqjehslyiqqqiwtgktgpmknkfglkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157498.0047977-124-179774823441657/AnsiballZ_stat.py'
Nov 26 11:44:58 compute-0 sudo[134243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:58 compute-0 python3.9[134245]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:44:58 compute-0 sudo[134243]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:58 compute-0 sudo[134366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-latprmfbtqqaifcjlzgrnnrkotkyhwqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157498.0047977-124-179774823441657/AnsiballZ_copy.py'
Nov 26 11:44:58 compute-0 sudo[134366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:58 compute-0 ceph-mon[74928]: pgmap v292: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:58 compute-0 python3.9[134368]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157498.0047977-124-179774823441657/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=ab35235c5364df79f3ee340ae546e38a8afa5f9f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:44:58 compute-0 sudo[134366]: pam_unix(sudo:session): session closed for user root
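The four invocations above (directory creation, then tls.crt, ca.crt and tls.key copies) follow a standard Ansible directory-plus-copy pattern for installing a TLS bundle. A minimal sketch of equivalent tasks, reconstructed only from the parameters logged above; the task names and playbook layout are assumptions, not taken from the log:

    # Reconstructed sketch; path, modes, owner and SELinux type come from the
    # ansible.builtin.file / ansible.legacy.copy invocations logged above.
    - name: Create cert directory for neutron-metadata (hypothetical task name)
      become: true
      ansible.builtin.file:
        path: /var/lib/openstack/certs/neutron-metadata/default
        state: directory
        owner: root
        group: root
        mode: "0755"
        setype: container_file_t

    - name: Install TLS material (hypothetical task name)
      become: true
      ansible.builtin.copy:
        src: "{{ item.src }}"   # source names taken from the logged _original_basename values
        dest: "/var/lib/openstack/certs/neutron-metadata/default/{{ item.dest }}"
        owner: root
        group: root
        mode: "0600"
      loop:
        - { src: compute-0.ctlplane.example.com-tls.crt, dest: tls.crt }
        - { src: compute-0.ctlplane.example.com-ca.crt, dest: ca.crt }
        - { src: compute-0.ctlplane.example.com-tls.key, dest: tls.key }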
Nov 26 11:44:59 compute-0 sudo[134518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilpymxkhvsevxxxdtjpdqvxqztvsumzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157498.9050815-168-151867988426214/AnsiballZ_file.py'
Nov 26 11:44:59 compute-0 sudo[134518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:59 compute-0 python3.9[134520]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:44:59 compute-0 sudo[134518]: pam_unix(sudo:session): session closed for user root
Nov 26 11:44:59 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:44:59 compute-0 sudo[134670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsvbxzqkxjoutfaxxruzdurbyvzempdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157499.3692229-168-62189741377489/AnsiballZ_file.py'
Nov 26 11:44:59 compute-0 sudo[134670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:44:59 compute-0 python3.9[134672]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:44:59 compute-0 sudo[134670]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:00 compute-0 sudo[134822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzzumwdoswpxegyalulemfkixvrgioft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157499.8613636-183-15856669598316/AnsiballZ_stat.py'
Nov 26 11:45:00 compute-0 sudo[134822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:00 compute-0 python3.9[134824]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:00 compute-0 sudo[134822]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:00 compute-0 sudo[134945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfankvejxwcteyxgareeryezsvneixye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157499.8613636-183-15856669598316/AnsiballZ_copy.py'
Nov 26 11:45:00 compute-0 sudo[134945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:00 compute-0 python3.9[134947]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157499.8613636-183-15856669598316/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=773776f35146b7b7695d0b1ebb5c55fea9c7f68f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:00 compute-0 sudo[134945]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:00 compute-0 ceph-mon[74928]: pgmap v293: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:00 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 6.9 deep-scrub starts
Nov 26 11:45:00 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 6.9 deep-scrub ok
Nov 26 11:45:00 compute-0 sudo[135097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmsccbewilrdbdoyyirriknyiacwvskx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157500.7332752-183-232384519423382/AnsiballZ_stat.py'
Nov 26 11:45:00 compute-0 sudo[135097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:01 compute-0 python3.9[135099]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:01 compute-0 sudo[135097]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:01 compute-0 sudo[135220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oarkhsesxgoatvdpiaxyaktrovojqhzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157500.7332752-183-232384519423382/AnsiballZ_copy.py'
Nov 26 11:45:01 compute-0 sudo[135220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:01 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:01 compute-0 python3.9[135222]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157500.7332752-183-232384519423382/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=cb741bfcdb23d16f20319c9874edcfa99972b549 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:01 compute-0 sudo[135220]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:45:01 compute-0 ceph-mon[74928]: 6.9 deep-scrub starts
Nov 26 11:45:01 compute-0 ceph-mon[74928]: 6.9 deep-scrub ok
Nov 26 11:45:01 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 6.a deep-scrub starts
Nov 26 11:45:01 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 6.a deep-scrub ok
Nov 26 11:45:01 compute-0 sudo[135372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tailoraopvessfsnwohqhbjbsoxnxgeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157501.5995955-183-94624670482366/AnsiballZ_stat.py'
Nov 26 11:45:01 compute-0 sudo[135372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:01 compute-0 python3.9[135374]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:01 compute-0 sudo[135372]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:02 compute-0 sudo[135495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krqfrqwfomvreymrbnwxlwfmpfmfnkma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157501.5995955-183-94624670482366/AnsiballZ_copy.py'
Nov 26 11:45:02 compute-0 sudo[135495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:02 compute-0 python3.9[135497]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157501.5995955-183-94624670482366/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=c6148e7eede9fb19aaa5ddb07e3cd11210fbce42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:02 compute-0 sudo[135495]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:02 compute-0 ceph-mon[74928]: pgmap v294: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:02 compute-0 ceph-mon[74928]: 6.a deep-scrub starts
Nov 26 11:45:02 compute-0 ceph-mon[74928]: 6.a deep-scrub ok
Nov 26 11:45:02 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Nov 26 11:45:02 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Nov 26 11:45:03 compute-0 sudo[135647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgbhswthblnhxocqkzzouxgexawqqzuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157502.9403777-243-277167941886025/AnsiballZ_file.py'
Nov 26 11:45:03 compute-0 sudo[135647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:03 compute-0 python3.9[135649]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:45:03 compute-0 sudo[135647]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:03 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:03 compute-0 sudo[135799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgkihtzdsagkrjkhjaoietpsihykhkek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157503.4171684-251-59423111821229/AnsiballZ_stat.py'
Nov 26 11:45:03 compute-0 sudo[135799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:03 compute-0 ceph-mon[74928]: 9.16 scrub starts
Nov 26 11:45:03 compute-0 ceph-mon[74928]: 9.16 scrub ok
Nov 26 11:45:03 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 26 11:45:03 compute-0 python3.9[135801]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:03 compute-0 sudo[135799]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:03 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 26 11:45:03 compute-0 sudo[135922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siyafhnqboayrxyvtosjmgifwilkwfps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157503.4171684-251-59423111821229/AnsiballZ_copy.py'
Nov 26 11:45:03 compute-0 sudo[135922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:04 compute-0 python3.9[135924]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157503.4171684-251-59423111821229/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=cb4a049067962bd1105691478a237a6c6e4bd931 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:04 compute-0 sudo[135922]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:04 compute-0 sudo[136074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfscnwghnzougwvbdiasxjxktebnphzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157504.3188434-267-167224608246241/AnsiballZ_file.py'
Nov 26 11:45:04 compute-0 sudo[136074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:04 compute-0 python3.9[136076]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:45:04 compute-0 ceph-mon[74928]: pgmap v295: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:04 compute-0 ceph-mon[74928]: 9.1c scrub starts
Nov 26 11:45:04 compute-0 ceph-mon[74928]: 9.1c scrub ok
Nov 26 11:45:04 compute-0 sudo[136074]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:04 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Nov 26 11:45:04 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Nov 26 11:45:04 compute-0 sudo[136226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfqijfwcunvqzkflkdfaqurgwgehhvlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157504.7882369-275-202300440881423/AnsiballZ_stat.py'
Nov 26 11:45:04 compute-0 sudo[136226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:05 compute-0 python3.9[136228]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:05 compute-0 sudo[136226]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:05 compute-0 sudo[136349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgxdpnyzpydtorsufuauikjwronnajtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157504.7882369-275-202300440881423/AnsiballZ_copy.py'
Nov 26 11:45:05 compute-0 sudo[136349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:05 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:05 compute-0 python3.9[136351]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157504.7882369-275-202300440881423/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=cb4a049067962bd1105691478a237a6c6e4bd931 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:05 compute-0 sudo[136349]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:05 compute-0 ceph-mon[74928]: 9.1e scrub starts
Nov 26 11:45:05 compute-0 ceph-mon[74928]: 9.1e scrub ok
Nov 26 11:45:05 compute-0 sudo[136501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewskphmxjagvrskgdunqbrbytlfkxcnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157505.6818-291-256207258833978/AnsiballZ_file.py'
Nov 26 11:45:05 compute-0 sudo[136501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:06 compute-0 python3.9[136503]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:45:06 compute-0 sudo[136501]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:06 compute-0 sudo[136653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eipcjijxpgzcwytbezxdvmlwnxokugfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157506.1360807-299-147024643461787/AnsiballZ_stat.py'
Nov 26 11:45:06 compute-0 sudo[136653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:06 compute-0 python3.9[136655]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:06 compute-0 sudo[136653]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:45:06 compute-0 ceph-mon[74928]: pgmap v296: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:06 compute-0 sudo[136776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycaboxzhriumqkvyivklwabcujiecpul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157506.1360807-299-147024643461787/AnsiballZ_copy.py'
Nov 26 11:45:06 compute-0 sudo[136776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:06 compute-0 python3.9[136778]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157506.1360807-299-147024643461787/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=cb4a049067962bd1105691478a237a6c6e4bd931 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:06 compute-0 sudo[136776]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:07 compute-0 sudo[136928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlmpdufhzyiewyrlmejougmimlljtkwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157507.0201857-315-247661587912746/AnsiballZ_file.py'
Nov 26 11:45:07 compute-0 sudo[136928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:07 compute-0 python3.9[136930]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:45:07 compute-0 sudo[136928]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:07 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:07 compute-0 sudo[137080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzeuxyzrekolkgqixmsmohihifjdldgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157507.474242-323-191312339961807/AnsiballZ_stat.py'
Nov 26 11:45:07 compute-0 sudo[137080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:07 compute-0 python3.9[137082]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:07 compute-0 sudo[137080]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:08 compute-0 sudo[137203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upvthakvjecbscxjufiqlpfndurybwmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157507.474242-323-191312339961807/AnsiballZ_copy.py'
Nov 26 11:45:08 compute-0 sudo[137203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:08 compute-0 python3.9[137205]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157507.474242-323-191312339961807/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=cb4a049067962bd1105691478a237a6c6e4bd931 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:08 compute-0 sudo[137203]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:08 compute-0 sudo[137355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doowlimjlxjvdchiaxeaqhibgqpkhokr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157508.3544462-339-236794675900313/AnsiballZ_file.py'
Nov 26 11:45:08 compute-0 sudo[137355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:08 compute-0 ceph-mon[74928]: pgmap v297: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:08 compute-0 python3.9[137357]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:45:08 compute-0 sudo[137355]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:09 compute-0 sudo[137507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbjchoqaborlgeaslpurrvbgujyeomul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157508.820613-347-10039940485537/AnsiballZ_stat.py'
Nov 26 11:45:09 compute-0 sudo[137507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:09 compute-0 python3.9[137509]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:09 compute-0 sudo[137507]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:09 compute-0 sudo[137630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqnzogjtzqemxxkfaolsaqrnjpcpicns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157508.820613-347-10039940485537/AnsiballZ_copy.py'
Nov 26 11:45:09 compute-0 sudo[137630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:09 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:09 compute-0 python3.9[137632]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157508.820613-347-10039940485537/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=cb4a049067962bd1105691478a237a6c6e4bd931 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:09 compute-0 sudo[137630]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:09 compute-0 sudo[137782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqbawntqyxvkvesboqsjrgqbsyscyxix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157509.710429-363-259238862883945/AnsiballZ_file.py'
Nov 26 11:45:09 compute-0 sudo[137782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:10 compute-0 python3.9[137784]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:45:10 compute-0 sudo[137782]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:10 compute-0 sudo[137934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlvakangnhbfedcnelkjwxbujsshwefs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157510.169464-371-104651604142075/AnsiballZ_stat.py'
Nov 26 11:45:10 compute-0 sudo[137934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:10 compute-0 python3.9[137936]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:10 compute-0 sudo[137934]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:10 compute-0 ceph-mon[74928]: pgmap v298: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:10 compute-0 sudo[138057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewjkpyvmiulxehfgdxqzyflzqdkzlfhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157510.169464-371-104651604142075/AnsiballZ_copy.py'
Nov 26 11:45:10 compute-0 sudo[138057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:10 compute-0 python3.9[138059]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157510.169464-371-104651604142075/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=cb4a049067962bd1105691478a237a6c6e4bd931 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:10 compute-0 sudo[138057]: pam_unix(sudo:session): session closed for user root
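The same pattern repeats for the CA bundle: each service directory under /var/lib/openstack/cacerts (ovn, libvirt, neutron-metadata, bootstrap, repo-setup, nova) is created with mode 0755 and receives an identical tls-ca-bundle.pem (checksum cb4a049067962bd1105691478a237a6c6e4bd931, mode 0644). The log shows one stat/copy sequence per service; condensing that into a loop is an assumption for illustration only:

    # Reconstructed sketch; destinations, ownership and mode come from the logged copy invocations.
    - name: Distribute CA bundle to service cacert directories (hypothetical task name)
      become: true
      ansible.builtin.copy:
        src: tls-ca-bundle.pem
        dest: "/var/lib/openstack/cacerts/{{ item }}/tls-ca-bundle.pem"
        owner: root
        group: root
        mode: "0644"
      loop: [ovn, libvirt, neutron-metadata, bootstrap, repo-setup, nova]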
Nov 26 11:45:11 compute-0 sshd-session[131955]: Connection closed by 192.168.122.30 port 55746
Nov 26 11:45:11 compute-0 sshd-session[131952]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:45:11 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Nov 26 11:45:11 compute-0 systemd[1]: session-43.scope: Consumed 15.802s CPU time.
Nov 26 11:45:11 compute-0 systemd-logind[744]: Session 43 logged out. Waiting for processes to exit.
Nov 26 11:45:11 compute-0 systemd-logind[744]: Removed session 43.
Nov 26 11:45:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:45:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:45:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:45:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:45:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:45:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:45:11 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:45:12 compute-0 ceph-mon[74928]: pgmap v299: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:13 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:14 compute-0 ceph-mon[74928]: pgmap v300: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:15 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:16 compute-0 sshd-session[138084]: Accepted publickey for zuul from 192.168.122.30 port 46586 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:45:16 compute-0 systemd-logind[744]: New session 44 of user zuul.
Nov 26 11:45:16 compute-0 systemd[1]: Started Session 44 of User zuul.
Nov 26 11:45:16 compute-0 sshd-session[138084]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:45:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:45:16 compute-0 sudo[138237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfoxxqfvqvispzloajtbzqcydhkkhlgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157516.3209424-22-76792711507189/AnsiballZ_file.py'
Nov 26 11:45:16 compute-0 sudo[138237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:16 compute-0 ceph-mon[74928]: pgmap v301: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:16 compute-0 python3.9[138239]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:16 compute-0 sudo[138237]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:17 compute-0 sudo[138389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxietsnzhrkisfrxelfxqjijprjemvhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157516.952789-34-200496895567429/AnsiballZ_stat.py'
Nov 26 11:45:17 compute-0 sudo[138389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:17 compute-0 python3.9[138391]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:17 compute-0 sudo[138389]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:17 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:17 compute-0 sudo[138512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlwotvenpahnuuobqhpddsukjdgkiedp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157516.952789-34-200496895567429/AnsiballZ_copy.py'
Nov 26 11:45:17 compute-0 sudo[138512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:17 compute-0 python3.9[138514]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764157516.952789-34-200496895567429/.source.conf _original_basename=ceph.conf follow=False checksum=eb6eed38f58b4ceb757e2558a126249fe31b4a1f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:17 compute-0 sudo[138512]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:18 compute-0 sudo[138664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqqewmnfzbqulwnnjpsjsfhlarncvsdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157518.0040286-34-134516551375417/AnsiballZ_stat.py'
Nov 26 11:45:18 compute-0 sudo[138664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:18 compute-0 python3.9[138666]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:18 compute-0 sudo[138664]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:18 compute-0 sudo[138787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxhcddimlvobnezpuwhndbosmikcalwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157518.0040286-34-134516551375417/AnsiballZ_copy.py'
Nov 26 11:45:18 compute-0 sudo[138787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:18 compute-0 ceph-mon[74928]: pgmap v302: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:18 compute-0 python3.9[138789]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764157518.0040286-34-134516551375417/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=034e11544e08d3f6c57ef0872ea08ff526a4e1ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:18 compute-0 sudo[138787]: pam_unix(sudo:session): session closed for user root
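Session 44 drops the Ceph client configuration into /var/lib/openstack/config/ceph: ceph.conf with mode 0644 and ceph.client.openstack.keyring with mode 0600. A minimal equivalent sketch, with task names assumed:

    # Reconstructed sketch; directory path and per-file modes come from the logged invocations.
    - name: Create Ceph client config directory (hypothetical task name)
      become: true
      ansible.builtin.file:
        path: /var/lib/openstack/config/ceph
        state: directory
        mode: "0755"

    - name: Install ceph.conf and openstack client keyring (hypothetical task name)
      become: true
      ansible.builtin.copy:
        src: "{{ item.name }}"
        dest: "/var/lib/openstack/config/ceph/{{ item.name }}"
        mode: "{{ item.mode }}"
      loop:
        - { name: ceph.conf, mode: "0644" }
        - { name: ceph.client.openstack.keyring, mode: "0600" }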
Nov 26 11:45:18 compute-0 sshd-session[138087]: Connection closed by 192.168.122.30 port 46586
Nov 26 11:45:18 compute-0 sshd-session[138084]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:45:19 compute-0 systemd-logind[744]: Session 44 logged out. Waiting for processes to exit.
Nov 26 11:45:19 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Nov 26 11:45:19 compute-0 systemd[1]: session-44.scope: Consumed 1.701s CPU time.
Nov 26 11:45:19 compute-0 systemd-logind[744]: Removed session 44.
Nov 26 11:45:19 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:20 compute-0 ceph-mon[74928]: pgmap v303: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:21 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:45:22 compute-0 ceph-mon[74928]: pgmap v304: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:23 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:24 compute-0 sshd-session[138814]: Accepted publickey for zuul from 192.168.122.30 port 46602 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:45:24 compute-0 systemd-logind[744]: New session 45 of user zuul.
Nov 26 11:45:24 compute-0 systemd[1]: Started Session 45 of User zuul.
Nov 26 11:45:24 compute-0 sshd-session[138814]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:45:24 compute-0 ceph-mon[74928]: pgmap v305: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:25 compute-0 python3.9[138967]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:45:25 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:25 compute-0 sudo[139121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bczcylegzgxktwgujahobibfhjqhrsbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157525.5673401-34-177412242581179/AnsiballZ_file.py'
Nov 26 11:45:25 compute-0 sudo[139121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:26 compute-0 python3.9[139123]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:45:26 compute-0 sudo[139121]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:26 compute-0 sudo[139273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khzazoojgbskjerpelddmtqylacxxnop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157526.1144717-34-69940707686371/AnsiballZ_file.py'
Nov 26 11:45:26 compute-0 sudo[139273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:26 compute-0 python3.9[139275]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:45:26 compute-0 sudo[139273]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:45:26 compute-0 ceph-mon[74928]: pgmap v306: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:26 compute-0 python3.9[139426]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:45:27 compute-0 sudo[139576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shbzbvbbkzvuhymsyuyzazaljyrvzdhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157527.1247344-57-170450526712313/AnsiballZ_seboolean.py'
Nov 26 11:45:27 compute-0 sudo[139576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:27 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:27 compute-0 python3.9[139578]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 26 11:45:28 compute-0 sudo[139576]: pam_unix(sudo:session): session closed for user root
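The SELinux step above sets the virt_sandbox_use_netlink boolean persistently via ansible.posix.seboolean. A one-task sketch (task name assumed):

    - name: Allow sandboxed virt processes to use netlink (hypothetical task name)
      become: true
      ansible.posix.seboolean:
        name: virt_sandbox_use_netlink
        state: true
        persistent: true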
Nov 26 11:45:28 compute-0 ceph-mon[74928]: pgmap v307: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:28 compute-0 sudo[139732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chgyioojjujxahkfeortufmgndpdqkhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157528.5535424-67-267516256376562/AnsiballZ_setup.py'
Nov 26 11:45:28 compute-0 dbus-broker-launch[733]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 26 11:45:28 compute-0 sudo[139732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:28 compute-0 python3.9[139734]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 11:45:29 compute-0 sudo[139732]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:29 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:29 compute-0 sudo[139816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmvkdvskkwilqxutcdfosufiavawypnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157528.5535424-67-267516256376562/AnsiballZ_dnf.py'
Nov 26 11:45:29 compute-0 sudo[139816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:29 compute-0 python3.9[139818]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 11:45:30 compute-0 sudo[139816]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:30 compute-0 ceph-mon[74928]: pgmap v308: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:31 compute-0 sudo[139969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sddykzyppnmzonovjvnoqiivymcmavxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157530.7751126-79-95009538378164/AnsiballZ_systemd.py'
Nov 26 11:45:31 compute-0 sudo[139969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:31 compute-0 python3.9[139971]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 11:45:31 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:31 compute-0 sudo[139969]: pam_unix(sudo:session): session closed for user root
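The package and service handling above maps to a dnf install of openvswitch followed by enabling and starting openvswitch.service. A sketch of the two tasks, with names assumed:

    - name: Install openvswitch (hypothetical task name)
      become: true
      ansible.builtin.dnf:
        name: openvswitch
        state: present

    - name: Enable and start openvswitch (hypothetical task name)
      become: true
      ansible.builtin.systemd:
        name: openvswitch.service
        enabled: true
        state: started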
Nov 26 11:45:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:45:31 compute-0 sudo[140124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnhfkrdxpnqawsafsmmiqchnrzuozvui ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764157531.6084633-87-155638876324620/AnsiballZ_edpm_nftables_snippet.py'
Nov 26 11:45:31 compute-0 sudo[140124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:32 compute-0 python3[140126]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 26 11:45:32 compute-0 sudo[140124]: pam_unix(sudo:session): session closed for user root
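The osp.edpm.edpm_nftables_snippet invocation above writes the OVN firewall rules (VXLAN 4789/udp, Geneve 6081/udp, plus NOTRACK raw-table entries) to /var/lib/edpm-config/firewall/ovn.yaml. Rendered as a task, with the rule content reproduced verbatim from the logged parameters and only the task name assumed:

    - name: Write OVN firewall snippet (hypothetical task name)
      become: true
      osp.edpm.edpm_nftables_snippet:
        dest: /var/lib/edpm-config/firewall/ovn.yaml
        state: present
        content: |
          - rule_name: 118 neutron vxlan networks
            rule:
              proto: udp
              dport: 4789
          - rule_name: 119 neutron geneve networks
            rule:
              proto: udp
              dport: 6081
              state: ["UNTRACKED"]
          - rule_name: 120 neutron geneve networks no conntrack
            rule:
              proto: udp
              dport: 6081
              table: raw
              chain: OUTPUT
              jump: NOTRACK
              action: append
              state: []
          - rule_name: 121 neutron geneve networks no conntrack
            rule:
              proto: udp
              dport: 6081
              table: raw
              chain: PREROUTING
              jump: NOTRACK
              action: append
              state: []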
Nov 26 11:45:32 compute-0 sudo[140276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suzhhwvqiconkxiazzonximtrgzqjrzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157532.2853982-96-126390035440043/AnsiballZ_file.py'
Nov 26 11:45:32 compute-0 sudo[140276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:32 compute-0 python3.9[140278]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:32 compute-0 sudo[140276]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:32 compute-0 ceph-mon[74928]: pgmap v309: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:33 compute-0 sudo[140428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhtvemmkzqfkjcnikrxnxhrgqdrseaut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157532.7664976-104-153933308308335/AnsiballZ_stat.py'
Nov 26 11:45:33 compute-0 sudo[140428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:33 compute-0 python3.9[140430]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:33 compute-0 sudo[140428]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:33 compute-0 sudo[140506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttkqckwabpsanfkxfbnapfjxccsvnouc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157532.7664976-104-153933308308335/AnsiballZ_file.py'
Nov 26 11:45:33 compute-0 sudo[140506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:33 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:33 compute-0 python3.9[140508]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:33 compute-0 sudo[140506]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:33 compute-0 sudo[140658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpayykuhrpsgucrwhcvikxfohukqhjca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157533.6899066-116-129878397869253/AnsiballZ_stat.py'
Nov 26 11:45:33 compute-0 sudo[140658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:34 compute-0 python3.9[140660]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:34 compute-0 sudo[140658]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:34 compute-0 sudo[140736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alxsndjzgxjdkwovafoifpcwcuripmbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157533.6899066-116-129878397869253/AnsiballZ_file.py'
Nov 26 11:45:34 compute-0 sudo[140736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:34 compute-0 python3.9[140738]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.j_x3od_7 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:34 compute-0 sudo[140736]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:34 compute-0 sudo[140888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mohckllthwunojmvghxepxnslwamxeci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157534.492435-128-173223122442822/AnsiballZ_stat.py'
Nov 26 11:45:34 compute-0 sudo[140888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:34 compute-0 ceph-mon[74928]: pgmap v310: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:34 compute-0 python3.9[140890]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:34 compute-0 sudo[140888]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:35 compute-0 sudo[140966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmolovecgfdldpawuemcbhzzpxqottlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157534.492435-128-173223122442822/AnsiballZ_file.py'
Nov 26 11:45:35 compute-0 sudo[140966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:35 compute-0 python3.9[140968]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:35 compute-0 sudo[140966]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:35 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:35 compute-0 sudo[141118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtkfccybkyclkfdzqgqnuofgcyentqzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157535.3264804-141-53996012406542/AnsiballZ_command.py'
Nov 26 11:45:35 compute-0 sudo[141118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:35 compute-0 python3.9[141120]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:45:35 compute-0 sudo[141118]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:36 compute-0 sudo[141271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obogkitdbonutqsrciholfqfxzsoxqfq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764157535.9094932-149-192732819494687/AnsiballZ_edpm_nftables_from_files.py'
Nov 26 11:45:36 compute-0 sudo[141271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:36 compute-0 python3[141273]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 26 11:45:36 compute-0 sudo[141271]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:45:36 compute-0 sudo[141350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:45:36 compute-0 sudo[141350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:45:36 compute-0 sudo[141350]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:36 compute-0 sudo[141375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:45:36 compute-0 sudo[141375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:45:36 compute-0 sudo[141375]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:36 compute-0 sudo[141423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:45:36 compute-0 sudo[141423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:45:36 compute-0 sudo[141423]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:36 compute-0 sudo[141472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 11:45:36 compute-0 sudo[141472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:45:36 compute-0 sudo[141521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptxsrtjkrckhaztxyrdhikjfnlfklwuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157536.4989126-157-3030740541931/AnsiballZ_stat.py'
Nov 26 11:45:36 compute-0 sudo[141521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:36 compute-0 ceph-mon[74928]: pgmap v311: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:36 compute-0 python3.9[141525]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:36 compute-0 sudo[141521]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:37 compute-0 sudo[141472]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:45:37 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:45:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:45:37 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:45:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:45:37 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:45:37 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 0ebdf341-0f6a-485b-a98a-69332075364a does not exist
Nov 26 11:45:37 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 0a2b27d5-b2e3-49e0-b557-ec62b4e3203d does not exist
Nov 26 11:45:37 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev da6983a4-1f98-443b-a950-8598d9afec76 does not exist
Nov 26 11:45:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 11:45:37 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:45:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 11:45:37 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:45:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:45:37 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:45:37 compute-0 sudo[141603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:45:37 compute-0 sudo[141603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:45:37 compute-0 sudo[141603]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:37 compute-0 sudo[141628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:45:37 compute-0 sudo[141628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:45:37 compute-0 sudo[141628]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:37 compute-0 sudo[141673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:45:37 compute-0 sudo[141673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:45:37 compute-0 sudo[141673]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:37 compute-0 sudo[141714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 11:45:37 compute-0 sudo[141714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:45:37 compute-0 sudo[141776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmxevsvhecsgtepvxichswnmxgvyvkvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157536.4989126-157-3030740541931/AnsiballZ_copy.py'
Nov 26 11:45:37 compute-0 sudo[141776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:37 compute-0 python3.9[141778]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157536.4989126-157-3030740541931/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:37 compute-0 sudo[141776]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:37 compute-0 podman[141811]: 2025-11-26 11:45:37.448111125 +0000 UTC m=+0.028343645 container create ae3b4e5092c569dfb9157baee2b5fb1eb24bf759c6b3245d36276e03344884c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_vaughan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 11:45:37 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:37 compute-0 systemd[1]: Started libpod-conmon-ae3b4e5092c569dfb9157baee2b5fb1eb24bf759c6b3245d36276e03344884c3.scope.
Nov 26 11:45:37 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:45:37 compute-0 podman[141811]: 2025-11-26 11:45:37.505394587 +0000 UTC m=+0.085627126 container init ae3b4e5092c569dfb9157baee2b5fb1eb24bf759c6b3245d36276e03344884c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:45:37 compute-0 podman[141811]: 2025-11-26 11:45:37.510959048 +0000 UTC m=+0.091191568 container start ae3b4e5092c569dfb9157baee2b5fb1eb24bf759c6b3245d36276e03344884c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 11:45:37 compute-0 podman[141811]: 2025-11-26 11:45:37.512218185 +0000 UTC m=+0.092450724 container attach ae3b4e5092c569dfb9157baee2b5fb1eb24bf759c6b3245d36276e03344884c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_vaughan, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 11:45:37 compute-0 reverent_vaughan[141848]: 167 167
Nov 26 11:45:37 compute-0 systemd[1]: libpod-ae3b4e5092c569dfb9157baee2b5fb1eb24bf759c6b3245d36276e03344884c3.scope: Deactivated successfully.
Nov 26 11:45:37 compute-0 podman[141811]: 2025-11-26 11:45:37.514967043 +0000 UTC m=+0.095199562 container died ae3b4e5092c569dfb9157baee2b5fb1eb24bf759c6b3245d36276e03344884c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_vaughan, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 11:45:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c40a388a57be2d304e49d3625e440fb1b0ac1ca19974ff411743b84f72d11a5-merged.mount: Deactivated successfully.
Nov 26 11:45:37 compute-0 podman[141811]: 2025-11-26 11:45:37.436114825 +0000 UTC m=+0.016347354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:45:37 compute-0 podman[141811]: 2025-11-26 11:45:37.537110214 +0000 UTC m=+0.117342733 container remove ae3b4e5092c569dfb9157baee2b5fb1eb24bf759c6b3245d36276e03344884c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 26 11:45:37 compute-0 systemd[1]: libpod-conmon-ae3b4e5092c569dfb9157baee2b5fb1eb24bf759c6b3245d36276e03344884c3.scope: Deactivated successfully.
Nov 26 11:45:37 compute-0 podman[141922]: 2025-11-26 11:45:37.651216307 +0000 UTC m=+0.028137405 container create 6f4849255eda4954223c328719bd97dca8f82a60758961f4c280ee983ffd6400 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:45:37 compute-0 systemd[1]: Started libpod-conmon-6f4849255eda4954223c328719bd97dca8f82a60758961f4c280ee983ffd6400.scope.
Nov 26 11:45:37 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897142fb21f3d4b0036a569cd56ec4c6358bcd81e6a39c5f4bf34ba9d0ab2b3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897142fb21f3d4b0036a569cd56ec4c6358bcd81e6a39c5f4bf34ba9d0ab2b3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897142fb21f3d4b0036a569cd56ec4c6358bcd81e6a39c5f4bf34ba9d0ab2b3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897142fb21f3d4b0036a569cd56ec4c6358bcd81e6a39c5f4bf34ba9d0ab2b3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897142fb21f3d4b0036a569cd56ec4c6358bcd81e6a39c5f4bf34ba9d0ab2b3e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:45:37 compute-0 podman[141922]: 2025-11-26 11:45:37.70403907 +0000 UTC m=+0.080960178 container init 6f4849255eda4954223c328719bd97dca8f82a60758961f4c280ee983ffd6400 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shaw, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 11:45:37 compute-0 podman[141922]: 2025-11-26 11:45:37.708885386 +0000 UTC m=+0.085806484 container start 6f4849255eda4954223c328719bd97dca8f82a60758961f4c280ee983ffd6400 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:45:37 compute-0 podman[141922]: 2025-11-26 11:45:37.710240775 +0000 UTC m=+0.087161872 container attach 6f4849255eda4954223c328719bd97dca8f82a60758961f4c280ee983ffd6400 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shaw, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:45:37 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:45:37 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:45:37 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:45:37 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:45:37 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:45:37 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:45:37 compute-0 podman[141922]: 2025-11-26 11:45:37.640470948 +0000 UTC m=+0.017392066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:45:37 compute-0 sudo[142013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qasftwlljbafiqjlvcgwsworsfrefcxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157537.55605-172-174888913056392/AnsiballZ_stat.py'
Nov 26 11:45:37 compute-0 sudo[142013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:37 compute-0 python3.9[142015]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:37 compute-0 sudo[142013]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:38 compute-0 sudo[142138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omqryjkedenchgqlyldndxouukgbspab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157537.55605-172-174888913056392/AnsiballZ_copy.py'
Nov 26 11:45:38 compute-0 sudo[142138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:38 compute-0 python3.9[142140]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157537.55605-172-174888913056392/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:38 compute-0 sudo[142138]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:38 compute-0 naughty_shaw[141953]: --> passed data devices: 0 physical, 3 LVM
Nov 26 11:45:38 compute-0 naughty_shaw[141953]: --> relative data size: 1.0
Nov 26 11:45:38 compute-0 naughty_shaw[141953]: --> All data devices are unavailable
Nov 26 11:45:38 compute-0 systemd[1]: libpod-6f4849255eda4954223c328719bd97dca8f82a60758961f4c280ee983ffd6400.scope: Deactivated successfully.
Nov 26 11:45:38 compute-0 podman[141922]: 2025-11-26 11:45:38.522429152 +0000 UTC m=+0.899350261 container died 6f4849255eda4954223c328719bd97dca8f82a60758961f4c280ee983ffd6400 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shaw, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:45:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-897142fb21f3d4b0036a569cd56ec4c6358bcd81e6a39c5f4bf34ba9d0ab2b3e-merged.mount: Deactivated successfully.
Nov 26 11:45:38 compute-0 podman[141922]: 2025-11-26 11:45:38.558107122 +0000 UTC m=+0.935028220 container remove 6f4849255eda4954223c328719bd97dca8f82a60758961f4c280ee983ffd6400 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 11:45:38 compute-0 systemd[1]: libpod-conmon-6f4849255eda4954223c328719bd97dca8f82a60758961f4c280ee983ffd6400.scope: Deactivated successfully.
Nov 26 11:45:38 compute-0 sudo[141714]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:38 compute-0 sudo[142270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:45:38 compute-0 sudo[142270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:45:38 compute-0 sudo[142270]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:38 compute-0 sudo[142304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:45:38 compute-0 sudo[142304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:45:38 compute-0 sudo[142304]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:38 compute-0 sudo[142393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvoqehzwsohqrrzkrufmtppaojkyokww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157538.4873533-187-223388615585999/AnsiballZ_stat.py'
Nov 26 11:45:38 compute-0 sudo[142393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:38 compute-0 sudo[142356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:45:38 compute-0 sudo[142356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:45:38 compute-0 sudo[142356]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:38 compute-0 ceph-mon[74928]: pgmap v312: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:38 compute-0 sudo[142401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- lvm list --format json
Nov 26 11:45:38 compute-0 sudo[142401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:45:38 compute-0 python3.9[142398]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:38 compute-0 sudo[142393]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:38 compute-0 podman[142482]: 2025-11-26 11:45:38.982315787 +0000 UTC m=+0.028701772 container create 7dc1d15eb06ceabbdce31ef5852e0bd33a385697cc1fca91295e5d1ed60e9337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_nash, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 11:45:39 compute-0 systemd[1]: Started libpod-conmon-7dc1d15eb06ceabbdce31ef5852e0bd33a385697cc1fca91295e5d1ed60e9337.scope.
Nov 26 11:45:39 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:45:39 compute-0 podman[142482]: 2025-11-26 11:45:39.032656406 +0000 UTC m=+0.079042401 container init 7dc1d15eb06ceabbdce31ef5852e0bd33a385697cc1fca91295e5d1ed60e9337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:45:39 compute-0 podman[142482]: 2025-11-26 11:45:39.038847244 +0000 UTC m=+0.085233219 container start 7dc1d15eb06ceabbdce31ef5852e0bd33a385697cc1fca91295e5d1ed60e9337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:45:39 compute-0 podman[142482]: 2025-11-26 11:45:39.040009937 +0000 UTC m=+0.086395912 container attach 7dc1d15eb06ceabbdce31ef5852e0bd33a385697cc1fca91295e5d1ed60e9337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 11:45:39 compute-0 tender_nash[142519]: 167 167
Nov 26 11:45:39 compute-0 systemd[1]: libpod-7dc1d15eb06ceabbdce31ef5852e0bd33a385697cc1fca91295e5d1ed60e9337.scope: Deactivated successfully.
Nov 26 11:45:39 compute-0 conmon[142519]: conmon 7dc1d15eb06ceabbdce3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7dc1d15eb06ceabbdce31ef5852e0bd33a385697cc1fca91295e5d1ed60e9337.scope/container/memory.events
Nov 26 11:45:39 compute-0 podman[142482]: 2025-11-26 11:45:39.042885982 +0000 UTC m=+0.089271957 container died 7dc1d15eb06ceabbdce31ef5852e0bd33a385697cc1fca91295e5d1ed60e9337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 11:45:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b35d37c442be41fc6e0fc4fb8e111bc3af5da0a946fbf46aaa3d67c92f75b2f-merged.mount: Deactivated successfully.
Nov 26 11:45:39 compute-0 podman[142482]: 2025-11-26 11:45:39.062075179 +0000 UTC m=+0.108461154 container remove 7dc1d15eb06ceabbdce31ef5852e0bd33a385697cc1fca91295e5d1ed60e9337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_nash, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 11:45:39 compute-0 podman[142482]: 2025-11-26 11:45:38.970570752 +0000 UTC m=+0.016956747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:45:39 compute-0 systemd[1]: libpod-conmon-7dc1d15eb06ceabbdce31ef5852e0bd33a385697cc1fca91295e5d1ed60e9337.scope: Deactivated successfully.
Nov 26 11:45:39 compute-0 sudo[142610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxzacwtiurdaqynpevjjeprhnjklpwub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157538.4873533-187-223388615585999/AnsiballZ_copy.py'
Nov 26 11:45:39 compute-0 sudo[142610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:39 compute-0 podman[142616]: 2025-11-26 11:45:39.182539103 +0000 UTC m=+0.027351945 container create 378763919ad2b63b188881c28234ca7e4e764ab389a2bc7d69d656d4da703fc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 11:45:39 compute-0 systemd[1]: Started libpod-conmon-378763919ad2b63b188881c28234ca7e4e764ab389a2bc7d69d656d4da703fc3.scope.
Nov 26 11:45:39 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68a590fea1d242ee118f59ec98efe68accf7a40dba781c1a4cf54e6ec244baaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68a590fea1d242ee118f59ec98efe68accf7a40dba781c1a4cf54e6ec244baaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68a590fea1d242ee118f59ec98efe68accf7a40dba781c1a4cf54e6ec244baaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68a590fea1d242ee118f59ec98efe68accf7a40dba781c1a4cf54e6ec244baaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:45:39 compute-0 podman[142616]: 2025-11-26 11:45:39.239497645 +0000 UTC m=+0.084310497 container init 378763919ad2b63b188881c28234ca7e4e764ab389a2bc7d69d656d4da703fc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_archimedes, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 11:45:39 compute-0 podman[142616]: 2025-11-26 11:45:39.244696792 +0000 UTC m=+0.089509635 container start 378763919ad2b63b188881c28234ca7e4e764ab389a2bc7d69d656d4da703fc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_archimedes, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 11:45:39 compute-0 podman[142616]: 2025-11-26 11:45:39.245875576 +0000 UTC m=+0.090688418 container attach 378763919ad2b63b188881c28234ca7e4e764ab389a2bc7d69d656d4da703fc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 11:45:39 compute-0 podman[142616]: 2025-11-26 11:45:39.171128018 +0000 UTC m=+0.015940880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:45:39 compute-0 python3.9[142618]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157538.4873533-187-223388615585999/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:39 compute-0 sudo[142610]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:39 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:39 compute-0 sudo[142784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhoeicgrrhienwekozdqqtpitbyvtwhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157539.4336672-202-195621997729359/AnsiballZ_stat.py'
Nov 26 11:45:39 compute-0 sudo[142784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:39 compute-0 python3.9[142786]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:39 compute-0 sudo[142784]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]: {
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:     "0": [
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:         {
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "devices": [
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "/dev/loop3"
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             ],
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "lv_name": "ceph_lv0",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "lv_size": "21470642176",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a9ad59a0-aa2e-4d92-b571-519d2d145b6a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "lv_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "name": "ceph_lv0",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "tags": {
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.block_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.cluster_name": "ceph",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.crush_device_class": "",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.encrypted": "0",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.osd_fsid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.osd_id": "0",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.type": "block",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.vdo": "0"
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             },
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "type": "block",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "vg_name": "ceph_vg0"
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:         }
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:     ],
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:     "1": [
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:         {
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "devices": [
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "/dev/loop4"
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             ],
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "lv_name": "ceph_lv1",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "lv_size": "21470642176",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2627095b-eef8-4027-bfef-68bf7cb6801f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "lv_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "name": "ceph_lv1",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "tags": {
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.block_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.cluster_name": "ceph",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.crush_device_class": "",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.encrypted": "0",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.osd_fsid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.osd_id": "1",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.type": "block",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.vdo": "0"
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             },
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "type": "block",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "vg_name": "ceph_vg1"
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:         }
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:     ],
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:     "2": [
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:         {
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "devices": [
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "/dev/loop5"
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             ],
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "lv_name": "ceph_lv2",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "lv_size": "21470642176",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d56156fb-7361-4bef-b06b-1320109b4323,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "lv_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "name": "ceph_lv2",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "tags": {
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.block_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.cluster_name": "ceph",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.crush_device_class": "",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.encrypted": "0",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.osd_fsid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.osd_id": "2",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.type": "block",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:                 "ceph.vdo": "0"
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             },
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "type": "block",
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:             "vg_name": "ceph_vg2"
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:         }
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]:     ]
Nov 26 11:45:39 compute-0 naughty_archimedes[142630]: }
Nov 26 11:45:39 compute-0 systemd[1]: libpod-378763919ad2b63b188881c28234ca7e4e764ab389a2bc7d69d656d4da703fc3.scope: Deactivated successfully.
Nov 26 11:45:39 compute-0 conmon[142630]: conmon 378763919ad2b63b1888 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-378763919ad2b63b188881c28234ca7e4e764ab389a2bc7d69d656d4da703fc3.scope/container/memory.events
Nov 26 11:45:39 compute-0 podman[142616]: 2025-11-26 11:45:39.920957729 +0000 UTC m=+0.765770571 container died 378763919ad2b63b188881c28234ca7e4e764ab389a2bc7d69d656d4da703fc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 26 11:45:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-68a590fea1d242ee118f59ec98efe68accf7a40dba781c1a4cf54e6ec244baaa-merged.mount: Deactivated successfully.
Nov 26 11:45:39 compute-0 podman[142616]: 2025-11-26 11:45:39.95953557 +0000 UTC m=+0.804348412 container remove 378763919ad2b63b188881c28234ca7e4e764ab389a2bc7d69d656d4da703fc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_archimedes, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 11:45:39 compute-0 systemd[1]: libpod-conmon-378763919ad2b63b188881c28234ca7e4e764ab389a2bc7d69d656d4da703fc3.scope: Deactivated successfully.
Nov 26 11:45:39 compute-0 sudo[142401]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:40 compute-0 sudo[142850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:45:40 compute-0 sudo[142850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:45:40 compute-0 sudo[142850]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:40 compute-0 sudo[142898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:45:40 compute-0 sudo[142898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:45:40 compute-0 sudo[142898]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:40 compute-0 sudo[142947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:45:40 compute-0 sudo[142947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:45:40 compute-0 sudo[142947]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:40 compute-0 sudo[142997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaqhdpkfkxsxjaglyqeyhlkawmybsbxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157539.4336672-202-195621997729359/AnsiballZ_copy.py'
Nov 26 11:45:40 compute-0 sudo[142997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:40 compute-0 sudo[142999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- raw list --format json
Nov 26 11:45:40 compute-0 sudo[142999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:45:40 compute-0 python3.9[143004]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157539.4336672-202-195621997729359/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:40 compute-0 sudo[142997]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:40 compute-0 podman[143081]: 2025-11-26 11:45:40.394482249 +0000 UTC m=+0.026705777 container create 8000e97c53eb0c4a6b277407930ed5252e85420fb3b09a84eb634e045fc22a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 11:45:40 compute-0 systemd[1]: Started libpod-conmon-8000e97c53eb0c4a6b277407930ed5252e85420fb3b09a84eb634e045fc22a36.scope.
Nov 26 11:45:40 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:45:40 compute-0 podman[143081]: 2025-11-26 11:45:40.447436228 +0000 UTC m=+0.079659766 container init 8000e97c53eb0c4a6b277407930ed5252e85420fb3b09a84eb634e045fc22a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:45:40 compute-0 podman[143081]: 2025-11-26 11:45:40.452559501 +0000 UTC m=+0.084783030 container start 8000e97c53eb0c4a6b277407930ed5252e85420fb3b09a84eb634e045fc22a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_albattani, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:45:40 compute-0 podman[143081]: 2025-11-26 11:45:40.453738596 +0000 UTC m=+0.085962125 container attach 8000e97c53eb0c4a6b277407930ed5252e85420fb3b09a84eb634e045fc22a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 11:45:40 compute-0 quirky_albattani[143118]: 167 167
Nov 26 11:45:40 compute-0 podman[143081]: 2025-11-26 11:45:40.456249743 +0000 UTC m=+0.088473271 container died 8000e97c53eb0c4a6b277407930ed5252e85420fb3b09a84eb634e045fc22a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_albattani, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:45:40 compute-0 systemd[1]: libpod-8000e97c53eb0c4a6b277407930ed5252e85420fb3b09a84eb634e045fc22a36.scope: Deactivated successfully.
Nov 26 11:45:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6bd53cdb382c1339cc5b3e32e6b823b981fd2ef462037666f523b7b95ab75bd-merged.mount: Deactivated successfully.
Nov 26 11:45:40 compute-0 podman[143081]: 2025-11-26 11:45:40.383760104 +0000 UTC m=+0.015983642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:45:40 compute-0 podman[143081]: 2025-11-26 11:45:40.482393619 +0000 UTC m=+0.114617147 container remove 8000e97c53eb0c4a6b277407930ed5252e85420fb3b09a84eb634e045fc22a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 26 11:45:40 compute-0 systemd[1]: libpod-conmon-8000e97c53eb0c4a6b277407930ed5252e85420fb3b09a84eb634e045fc22a36.scope: Deactivated successfully.
Nov 26 11:45:40 compute-0 podman[143169]: 2025-11-26 11:45:40.597523611 +0000 UTC m=+0.031294643 container create 86cf9d9e8939c45665c4d11672332ec49bf62b849befa5eb865c71f1c7560272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ishizaka, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:45:40 compute-0 systemd[1]: Started libpod-conmon-86cf9d9e8939c45665c4d11672332ec49bf62b849befa5eb865c71f1c7560272.scope.
Nov 26 11:45:40 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/115bd9bc98437cad2d6455af0b8a542fbf3206e6d53bd828db723eae8deed6dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/115bd9bc98437cad2d6455af0b8a542fbf3206e6d53bd828db723eae8deed6dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/115bd9bc98437cad2d6455af0b8a542fbf3206e6d53bd828db723eae8deed6dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/115bd9bc98437cad2d6455af0b8a542fbf3206e6d53bd828db723eae8deed6dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:45:40 compute-0 podman[143169]: 2025-11-26 11:45:40.653743329 +0000 UTC m=+0.087514371 container init 86cf9d9e8939c45665c4d11672332ec49bf62b849befa5eb865c71f1c7560272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:45:40 compute-0 podman[143169]: 2025-11-26 11:45:40.659257351 +0000 UTC m=+0.093028393 container start 86cf9d9e8939c45665c4d11672332ec49bf62b849befa5eb865c71f1c7560272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ishizaka, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:45:40 compute-0 podman[143169]: 2025-11-26 11:45:40.660441865 +0000 UTC m=+0.094212897 container attach 86cf9d9e8939c45665c4d11672332ec49bf62b849befa5eb865c71f1c7560272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ishizaka, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:45:40 compute-0 podman[143169]: 2025-11-26 11:45:40.58321498 +0000 UTC m=+0.016986032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:45:40 compute-0 sudo[143261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foautnjmcyxodyfgdxcaffqjkmxhwpac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157540.4258444-217-265969414749089/AnsiballZ_stat.py'
Nov 26 11:45:40 compute-0 sudo[143261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:40 compute-0 ceph-mon[74928]: pgmap v313: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:40 compute-0 python3.9[143263]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:40 compute-0 sudo[143261]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:41 compute-0 sudo[143386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qckwatibyakkxsfeofuvkibwjeznvnuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157540.4258444-217-265969414749089/AnsiballZ_copy.py'
Nov 26 11:45:41 compute-0 sudo[143386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:41 compute-0 python3.9[143388]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157540.4258444-217-265969414749089/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:41 compute-0 sudo[143386]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Optimize plan auto_2025-11-26_11:45:41
Nov 26 11:45:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 11:45:41 compute-0 ceph-mgr[75197]: [balancer INFO root] do_upmap
Nov 26 11:45:41 compute-0 ceph-mgr[75197]: [balancer INFO root] pools ['default.rgw.control', 'images', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'vms', 'backups']
Nov 26 11:45:41 compute-0 ceph-mgr[75197]: [balancer INFO root] prepared 0/10 changes
Nov 26 11:45:41 compute-0 zealous_ishizaka[143213]: {
Nov 26 11:45:41 compute-0 zealous_ishizaka[143213]:     "2627095b-eef8-4027-bfef-68bf7cb6801f": {
Nov 26 11:45:41 compute-0 zealous_ishizaka[143213]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:45:41 compute-0 zealous_ishizaka[143213]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 11:45:41 compute-0 zealous_ishizaka[143213]:         "osd_id": 1,
Nov 26 11:45:41 compute-0 zealous_ishizaka[143213]:         "osd_uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:45:41 compute-0 zealous_ishizaka[143213]:         "type": "bluestore"
Nov 26 11:45:41 compute-0 zealous_ishizaka[143213]:     },
Nov 26 11:45:41 compute-0 zealous_ishizaka[143213]:     "a9ad59a0-aa2e-4d92-b571-519d2d145b6a": {
Nov 26 11:45:41 compute-0 zealous_ishizaka[143213]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:45:41 compute-0 zealous_ishizaka[143213]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 11:45:41 compute-0 zealous_ishizaka[143213]:         "osd_id": 0,
Nov 26 11:45:41 compute-0 zealous_ishizaka[143213]:         "osd_uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:45:41 compute-0 zealous_ishizaka[143213]:         "type": "bluestore"
Nov 26 11:45:41 compute-0 zealous_ishizaka[143213]:     },
Nov 26 11:45:41 compute-0 zealous_ishizaka[143213]:     "d56156fb-7361-4bef-b06b-1320109b4323": {
Nov 26 11:45:41 compute-0 zealous_ishizaka[143213]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:45:41 compute-0 zealous_ishizaka[143213]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 11:45:41 compute-0 zealous_ishizaka[143213]:         "osd_id": 2,
Nov 26 11:45:41 compute-0 zealous_ishizaka[143213]:         "osd_uuid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:45:41 compute-0 zealous_ishizaka[143213]:         "type": "bluestore"
Nov 26 11:45:41 compute-0 zealous_ishizaka[143213]:     }
Nov 26 11:45:41 compute-0 zealous_ishizaka[143213]: }
Nov 26 11:45:41 compute-0 systemd[1]: libpod-86cf9d9e8939c45665c4d11672332ec49bf62b849befa5eb865c71f1c7560272.scope: Deactivated successfully.
Nov 26 11:45:41 compute-0 podman[143169]: 2025-11-26 11:45:41.415492415 +0000 UTC m=+0.849263447 container died 86cf9d9e8939c45665c4d11672332ec49bf62b849befa5eb865c71f1c7560272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 11:45:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:45:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:45:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:45:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:45:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:45:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:45:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-115bd9bc98437cad2d6455af0b8a542fbf3206e6d53bd828db723eae8deed6dc-merged.mount: Deactivated successfully.
Nov 26 11:45:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 11:45:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 11:45:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:45:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:45:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:45:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:45:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:45:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:45:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:45:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:45:41 compute-0 podman[143169]: 2025-11-26 11:45:41.447486145 +0000 UTC m=+0.881257177 container remove 86cf9d9e8939c45665c4d11672332ec49bf62b849befa5eb865c71f1c7560272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 11:45:41 compute-0 systemd[1]: libpod-conmon-86cf9d9e8939c45665c4d11672332ec49bf62b849befa5eb865c71f1c7560272.scope: Deactivated successfully.
Nov 26 11:45:41 compute-0 sudo[142999]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:45:41 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:41 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:45:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:45:41 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:45:41 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev d91c5016-a607-4604-8654-1a732851f985 does not exist
Nov 26 11:45:41 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 3bf2b9df-88bc-481e-b849-86f195b9d6be does not exist
Nov 26 11:45:41 compute-0 sudo[143503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:45:41 compute-0 sudo[143503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:45:41 compute-0 sudo[143503]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:45:41 compute-0 sudo[143551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:45:41 compute-0 sudo[143551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:45:41 compute-0 sudo[143551]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:41 compute-0 sudo[143626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avnhgwjqmzaewucaxhcpzdbfydcwegxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157541.4188025-232-127691362280020/AnsiballZ_file.py'
Nov 26 11:45:41 compute-0 sudo[143626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:41 compute-0 python3.9[143628]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:41 compute-0 sudo[143626]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:42 compute-0 sudo[143778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xloxfpvylhejmvtniyxbyncmdshblkok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157541.9404228-240-269353617162505/AnsiballZ_command.py'
Nov 26 11:45:42 compute-0 sudo[143778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:42 compute-0 python3.9[143780]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:45:42 compute-0 sudo[143778]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:42 compute-0 ceph-mon[74928]: pgmap v314: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:42 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:45:42 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:45:42 compute-0 sudo[143933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zygiwbieezpyivnxelzxwbffsopfadfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157542.468677-248-71579056601486/AnsiballZ_blockinfile.py'
Nov 26 11:45:42 compute-0 sudo[143933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:42 compute-0 python3.9[143935]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:42 compute-0 sudo[143933]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:43 compute-0 sudo[144085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmrqvlqucjozpumdcdniuzqrjmzveige ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157543.0930488-257-131182952216264/AnsiballZ_command.py'
Nov 26 11:45:43 compute-0 sudo[144085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:43 compute-0 python3.9[144087]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:45:43 compute-0 sudo[144085]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:43 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:43 compute-0 sudo[144238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vatgrbnxsumujekrgkcvuzcwuesugbdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157543.562406-265-170758711770966/AnsiballZ_stat.py'
Nov 26 11:45:43 compute-0 sudo[144238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:43 compute-0 python3.9[144240]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:45:43 compute-0 sudo[144238]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:44 compute-0 sudo[144392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umrsenfjfljnolyikzwkyjemuqxctkwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157544.0376766-273-139075822252545/AnsiballZ_command.py'
Nov 26 11:45:44 compute-0 sudo[144392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:44 compute-0 python3.9[144394]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:45:44 compute-0 sudo[144392]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:44 compute-0 ceph-mon[74928]: pgmap v315: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:44 compute-0 sudo[144547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avsqfejqimubdqnurjujgubundzjckne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157544.510305-281-138592577947471/AnsiballZ_file.py'
Nov 26 11:45:44 compute-0 sudo[144547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:44 compute-0 python3.9[144549]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:44 compute-0 sudo[144547]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:45 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:45 compute-0 python3.9[144699]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:45:46 compute-0 sudo[144850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nprbgtadpsajdqvertoswalwwhjrijec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157546.2618158-321-186751522366431/AnsiballZ_command.py'
Nov 26 11:45:46 compute-0 sudo[144850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:46 compute-0 ceph-mon[74928]: pgmap v316: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:45:46 compute-0 python3.9[144852]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:45:46 compute-0 ovs-vsctl[144853]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 26 11:45:46 compute-0 sudo[144850]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:46 compute-0 sudo[145003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgbudhnqnfrmrakwjguhofueetkepvpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157546.74008-330-138589678935062/AnsiballZ_command.py'
Nov 26 11:45:46 compute-0 sudo[145003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:47 compute-0 python3.9[145005]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:45:47 compute-0 sudo[145003]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:47 compute-0 sudo[145158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwdenylarxyiqktmoaftwacuncpliyqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157547.198597-338-113283136012369/AnsiballZ_command.py'
Nov 26 11:45:47 compute-0 sudo[145158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:47 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:47 compute-0 python3.9[145160]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:45:47 compute-0 ovs-vsctl[145161]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 26 11:45:47 compute-0 sudo[145158]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:48 compute-0 python3.9[145311]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:45:48 compute-0 sudo[145463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvhhfjdowaskagefchxdogvujqgrixts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157548.1948385-355-261321926887947/AnsiballZ_file.py'
Nov 26 11:45:48 compute-0 sudo[145463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:48 compute-0 ceph-mon[74928]: pgmap v317: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:48 compute-0 python3.9[145465]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:45:48 compute-0 sudo[145463]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:48 compute-0 sudo[145615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzkdivqsxlbyfhvzkjaijhkpbkvdefld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157548.6551442-363-85327093005038/AnsiballZ_stat.py'
Nov 26 11:45:48 compute-0 sudo[145615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:48 compute-0 sshd-session[71318]: Received disconnect from 192.168.26.201 port 45718:11: disconnected by user
Nov 26 11:45:48 compute-0 sshd-session[71318]: Disconnected from user zuul 192.168.26.201 port 45718
Nov 26 11:45:48 compute-0 sshd-session[71315]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:45:48 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Nov 26 11:45:48 compute-0 systemd[1]: session-17.scope: Consumed 1min 1.235s CPU time.
Nov 26 11:45:48 compute-0 systemd-logind[744]: Session 17 logged out. Waiting for processes to exit.
Nov 26 11:45:48 compute-0 systemd-logind[744]: Removed session 17.
Nov 26 11:45:48 compute-0 python3.9[145617]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:49 compute-0 sudo[145615]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:49 compute-0 sudo[145693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xggeqgpqclwjpcwmgjszrzidoccoxxbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157548.6551442-363-85327093005038/AnsiballZ_file.py'
Nov 26 11:45:49 compute-0 sudo[145693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:49 compute-0 python3.9[145695]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:45:49 compute-0 sudo[145693]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:49 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:49 compute-0 sudo[145845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaizzupwscyitpvvbvdnrthuldvgmctp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157549.4169772-363-255365271481660/AnsiballZ_stat.py'
Nov 26 11:45:49 compute-0 sudo[145845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:49 compute-0 python3.9[145847]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:49 compute-0 sudo[145845]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:49 compute-0 sudo[145923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhdnkuregoxggghxzabfsfmizboknihs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157549.4169772-363-255365271481660/AnsiballZ_file.py'
Nov 26 11:45:49 compute-0 sudo[145923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:50 compute-0 python3.9[145925]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:45:50 compute-0 sudo[145923]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:50 compute-0 sudo[146075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjsvcmtfephjwjrqfmxwkafglipjarhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157550.2859492-386-232121309487926/AnsiballZ_file.py'
Nov 26 11:45:50 compute-0 sudo[146075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:50 compute-0 ceph-mon[74928]: pgmap v318: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:50 compute-0 python3.9[146077]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:50 compute-0 sudo[146075]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 11:45:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:45:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 11:45:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:45:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:45:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:45:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:45:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:45:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:45:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:45:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:45:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:45:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 11:45:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:45:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:45:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:45:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 11:45:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:45:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 11:45:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:45:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:45:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:45:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 11:45:50 compute-0 sudo[146227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrcsjvrbtojjrsnnnuzmlfywoumzkjbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157550.7472835-394-155402616635462/AnsiballZ_stat.py'
Nov 26 11:45:50 compute-0 sudo[146227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:51 compute-0 python3.9[146229]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:51 compute-0 sudo[146227]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:51 compute-0 sudo[146305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqvbkscwyruqjnkbtswegztrntjbypav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157550.7472835-394-155402616635462/AnsiballZ_file.py'
Nov 26 11:45:51 compute-0 sudo[146305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:51 compute-0 python3.9[146307]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:51 compute-0 sudo[146305]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:51 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:45:51 compute-0 sudo[146457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkapzedsxeywjlqqkkeruvakkwqwolru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157551.5251942-406-18591663061525/AnsiballZ_stat.py'
Nov 26 11:45:51 compute-0 sudo[146457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:51 compute-0 python3.9[146459]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:51 compute-0 sudo[146457]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:52 compute-0 sudo[146535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snniagtklawvzcreciigfoxypxxbwzpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157551.5251942-406-18591663061525/AnsiballZ_file.py'
Nov 26 11:45:52 compute-0 sudo[146535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:52 compute-0 python3.9[146537]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:52 compute-0 sudo[146535]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:52 compute-0 sudo[146687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaojfafljgkzbhqsrbhjuzmwpvrgmxbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157552.2932-418-119037247929471/AnsiballZ_systemd.py'
Nov 26 11:45:52 compute-0 sudo[146687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:52 compute-0 ceph-mon[74928]: pgmap v319: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:52 compute-0 python3.9[146689]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:45:52 compute-0 systemd[1]: Reloading.
Nov 26 11:45:52 compute-0 systemd-rc-local-generator[146709]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:45:52 compute-0 systemd-sysv-generator[146713]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:45:52 compute-0 sudo[146687]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:53 compute-0 sudo[146875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrofhxehkfdxvhrwvuqikabiblahlfqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157553.0818217-426-146806160598533/AnsiballZ_stat.py'
Nov 26 11:45:53 compute-0 sudo[146875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:53 compute-0 python3.9[146877]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:53 compute-0 sudo[146875]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:53 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:53 compute-0 sudo[146953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khhlgpkuxmfweqxhchtfxwrbphicfmhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157553.0818217-426-146806160598533/AnsiballZ_file.py'
Nov 26 11:45:53 compute-0 sudo[146953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:53 compute-0 python3.9[146955]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:53 compute-0 sudo[146953]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:54 compute-0 sudo[147105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtujeuhixdcfesejckyhzgemmvjnubjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157553.8784218-438-37702999020370/AnsiballZ_stat.py'
Nov 26 11:45:54 compute-0 sudo[147105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:54 compute-0 python3.9[147107]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:54 compute-0 sudo[147105]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:54 compute-0 sudo[147183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymgodsovlyhijcuilehusnxiswkzthye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157553.8784218-438-37702999020370/AnsiballZ_file.py'
Nov 26 11:45:54 compute-0 sudo[147183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:54 compute-0 ceph-mon[74928]: pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:54 compute-0 python3.9[147185]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:54 compute-0 sudo[147183]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:54 compute-0 sudo[147335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmmhlthgopamjsmpiwqihfeucsqhxpvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157554.6551502-450-147004566833434/AnsiballZ_systemd.py'
Nov 26 11:45:54 compute-0 sudo[147335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:55 compute-0 python3.9[147337]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:45:55 compute-0 systemd[1]: Reloading.
Nov 26 11:45:55 compute-0 systemd-sysv-generator[147363]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:45:55 compute-0 systemd-rc-local-generator[147360]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:45:55 compute-0 systemd[1]: Starting Create netns directory...
Nov 26 11:45:55 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 26 11:45:55 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 26 11:45:55 compute-0 systemd[1]: Finished Create netns directory.
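Note: the ansible.builtin.systemd task above (daemon_reload=True, enabled=True, state=started) drives the Reloading/Starting/Finished sequence that follows it. A rough manual equivalent, sketched here for orientation rather than taken from the play itself:

    # reload unit files picked up from /etc/systemd/system, then enable and start the unit
    sudo systemctl daemon-reload
    sudo systemctl enable --now netns-placeholder.service
    # per the journal above the unit just creates the netns directory and exits
    systemctl status netns-placeholder.service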
Nov 26 11:45:55 compute-0 sudo[147335]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:55 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:55 compute-0 sudo[147528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwqlvdujepxvbqmnmcbvqxynfnqahavq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157555.557327-460-205723097214883/AnsiballZ_file.py'
Nov 26 11:45:55 compute-0 sudo[147528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:55 compute-0 python3.9[147530]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:45:55 compute-0 sudo[147528]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:56 compute-0 sudo[147680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvvubjtbcehmhimghenaickzkqcapkcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157556.0381746-468-226792360556667/AnsiballZ_stat.py'
Nov 26 11:45:56 compute-0 sudo[147680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:56 compute-0 python3.9[147682]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:56 compute-0 sudo[147680]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:56 compute-0 ceph-mon[74928]: pgmap v321: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:45:56 compute-0 sudo[147803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqnhaoseuphlthqtwqixjldfbjetmdkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157556.0381746-468-226792360556667/AnsiballZ_copy.py'
Nov 26 11:45:56 compute-0 sudo[147803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:56 compute-0 python3.9[147805]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764157556.0381746-468-226792360556667/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:45:56 compute-0 sudo[147803]: pam_unix(sudo:session): session closed for user root
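Note: the healthcheck script installed above ends up bind-mounted read-only at /openstack inside the ovn_controller container and wired up as its podman healthcheck command (see the config_data and podman create entries later in this run). Once the container is running it can be exercised directly; a small sketch, assuming the container name used below:

    # run the mounted healthcheck script inside the running container; exit code 0 means healthy
    sudo podman exec ovn_controller /openstack/healthcheck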
Nov 26 11:45:57 compute-0 sudo[147955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rclfecfxqfbxybkesbsmurogaylzypzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157557.0228732-485-207392483845723/AnsiballZ_file.py'
Nov 26 11:45:57 compute-0 sudo[147955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:57 compute-0 python3.9[147957]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:45:57 compute-0 sudo[147955]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:57 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:57 compute-0 sudo[148107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdppseclcmchoiesuiexnpfgmumpfkku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157557.535451-493-97596811568593/AnsiballZ_stat.py'
Nov 26 11:45:57 compute-0 sudo[148107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:57 compute-0 python3.9[148109]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:45:57 compute-0 sudo[148107]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:58 compute-0 sudo[148230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pponukddztvlaxjzxcakxrggkponnqwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157557.535451-493-97596811568593/AnsiballZ_copy.py'
Nov 26 11:45:58 compute-0 sudo[148230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:58 compute-0 python3.9[148232]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764157557.535451-493-97596811568593/.source.json _original_basename=.lr3mmpfr follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:58 compute-0 sudo[148230]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:58 compute-0 ceph-mon[74928]: pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:58 compute-0 sudo[148382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikegsblnwuhrpwrpwspfullkusffxyod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157558.3609242-508-12954761566510/AnsiballZ_file.py'
Nov 26 11:45:58 compute-0 sudo[148382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:58 compute-0 python3.9[148384]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:45:58 compute-0 sudo[148382]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:59 compute-0 sudo[148534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjoodqtksxofsubufoeecufhnlqbalkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157558.8545153-516-127162538521447/AnsiballZ_stat.py'
Nov 26 11:45:59 compute-0 sudo[148534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:59 compute-0 sudo[148534]: pam_unix(sudo:session): session closed for user root
Nov 26 11:45:59 compute-0 sudo[148657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxmvzqzskfeygjjlyidnesyigvtgivef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157558.8545153-516-127162538521447/AnsiballZ_copy.py'
Nov 26 11:45:59 compute-0 sudo[148657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:45:59 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:45:59 compute-0 sudo[148657]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:00 compute-0 sudo[148809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkzlagyraywhacvufojvudvxxyumyjlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157559.8090415-533-200086031063814/AnsiballZ_container_config_data.py'
Nov 26 11:46:00 compute-0 sudo[148809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:00 compute-0 python3.9[148811]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 26 11:46:00 compute-0 sudo[148809]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:00 compute-0 ceph-mon[74928]: pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:00 compute-0 sudo[148961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvygnacndfxbyrlgmiuwhjbygwfhhezr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157560.4397554-542-262298934466232/AnsiballZ_container_config_hash.py'
Nov 26 11:46:00 compute-0 sudo[148961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:00 compute-0 python3.9[148963]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 11:46:00 compute-0 sudo[148961]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:01 compute-0 sudo[149113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejgbfjqputvihstpfimsrkgbmnclszdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157561.0567808-551-189677710339215/AnsiballZ_podman_container_info.py'
Nov 26 11:46:01 compute-0 sudo[149113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:01 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:01 compute-0 python3.9[149115]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 26 11:46:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:46:01 compute-0 sudo[149113]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:02 compute-0 sudo[149283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jffoqclpheyjezbhjvnhvppnywncicmh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764157562.0862927-564-144657578491375/AnsiballZ_edpm_container_manage.py'
Nov 26 11:46:02 compute-0 sudo[149283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:02 compute-0 ceph-mon[74928]: pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:02 compute-0 python3[149285]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 11:46:03 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:04 compute-0 ceph-mon[74928]: pgmap v325: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:05 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:06 compute-0 ceph-mon[74928]: pgmap v326: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:46:07 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:07 compute-0 podman[149296]: 2025-11-26 11:46:07.559390235 +0000 UTC m=+4.871466831 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 26 11:46:07 compute-0 podman[149398]: 2025-11-26 11:46:07.652725936 +0000 UTC m=+0.028034313 container create cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Nov 26 11:46:07 compute-0 podman[149398]: 2025-11-26 11:46:07.639058736 +0000 UTC m=+0.014367123 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 26 11:46:07 compute-0 python3[149285]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
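For readability, the PODMAN-CONTAINER-DEBUG entry above reflowed into one option per line; the arguments are the same as logged, only the long config_data label value is abbreviated to '{...}':

    podman create --name ovn_controller \
        --conmon-pidfile /run/ovn_controller.pid \
        --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS \
        --healthcheck-command /openstack/healthcheck \
        --label config_id=ovn_controller \
        --label container_name=ovn_controller \
        --label managed_by=edpm_ansible \
        --label config_data='{...}' \
        --log-driver journald --log-level info \
        --network host --privileged=True --user root \
        --volume /lib/modules:/lib/modules:ro \
        --volume /run:/run \
        --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z \
        --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro \
        --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z \
        --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z \
        --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z \
        --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z \
        --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z \
        quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e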
Nov 26 11:46:07 compute-0 sudo[149283]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:08 compute-0 sudo[149576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqmomsnbhiwuzgneqsqcfgtrhwmiogre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157567.8601594-572-123770562881487/AnsiballZ_stat.py'
Nov 26 11:46:08 compute-0 sudo[149576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:08 compute-0 python3.9[149578]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:46:08 compute-0 sudo[149576]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:08 compute-0 ceph-mon[74928]: pgmap v327: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:08 compute-0 sudo[149730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvittapffnpowtjtzgzhcmvkdzcxvcov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157568.382527-581-13664961830282/AnsiballZ_file.py'
Nov 26 11:46:08 compute-0 sudo[149730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:08 compute-0 python3.9[149732]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:46:08 compute-0 sudo[149730]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:08 compute-0 sudo[149806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twuupufunlvthscqkhikviboqrzjnmxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157568.382527-581-13664961830282/AnsiballZ_stat.py'
Nov 26 11:46:08 compute-0 sudo[149806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:09 compute-0 python3.9[149808]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:46:09 compute-0 sudo[149806]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:09 compute-0 sudo[149957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbqeteytretgbmtopfsgjtltwpscvsph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157569.0660307-581-121617452834958/AnsiballZ_copy.py'
Nov 26 11:46:09 compute-0 sudo[149957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:09 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:09 compute-0 python3.9[149959]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764157569.0660307-581-121617452834958/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:46:09 compute-0 sudo[149957]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:09 compute-0 sudo[150033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rukcolrqgwogdmaofirryrzumikvkmsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157569.0660307-581-121617452834958/AnsiballZ_systemd.py'
Nov 26 11:46:09 compute-0 sudo[150033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:09 compute-0 python3.9[150035]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 11:46:09 compute-0 systemd[1]: Reloading.
Nov 26 11:46:10 compute-0 systemd-sysv-generator[150059]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:46:10 compute-0 systemd-rc-local-generator[150055]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:46:10 compute-0 sudo[150033]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:10 compute-0 sudo[150144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euotfyhilypnwecwmekbvntqmzytppyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157569.0660307-581-121617452834958/AnsiballZ_systemd.py'
Nov 26 11:46:10 compute-0 sudo[150144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:10 compute-0 ceph-mon[74928]: pgmap v328: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:10 compute-0 python3.9[150146]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:46:10 compute-0 systemd[1]: Reloading.
Nov 26 11:46:10 compute-0 systemd-sysv-generator[150174]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:46:10 compute-0 systemd-rc-local-generator[150171]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:46:10 compute-0 systemd[1]: Starting ovn_controller container...
Nov 26 11:46:10 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:46:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/066ead06050a5cc200cacac9d39a6499c95b869cb83552ab2e60d43dc81feabb/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 26 11:46:10 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e.
Nov 26 11:46:10 compute-0 podman[150187]: 2025-11-26 11:46:10.903675958 +0000 UTC m=+0.079298794 container init cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 11:46:10 compute-0 ovn_controller[150199]: + sudo -E kolla_set_configs
Nov 26 11:46:10 compute-0 podman[150187]: 2025-11-26 11:46:10.920024727 +0000 UTC m=+0.095647553 container start cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 26 11:46:10 compute-0 edpm-start-podman-container[150187]: ovn_controller
Nov 26 11:46:10 compute-0 systemd[1]: Created slice User Slice of UID 0.
Nov 26 11:46:10 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 26 11:46:10 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 26 11:46:10 compute-0 systemd[1]: Starting User Manager for UID 0...
Nov 26 11:46:10 compute-0 systemd[150228]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Nov 26 11:46:10 compute-0 edpm-start-podman-container[150186]: Creating additional drop-in dependency for "ovn_controller" (cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e)
Nov 26 11:46:10 compute-0 podman[150206]: 2025-11-26 11:46:10.987520446 +0000 UTC m=+0.060224144 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 26 11:46:10 compute-0 systemd[1]: Reloading.
Nov 26 11:46:11 compute-0 systemd[150228]: Queued start job for default target Main User Target.
Nov 26 11:46:11 compute-0 systemd[150228]: Created slice User Application Slice.
Nov 26 11:46:11 compute-0 systemd[150228]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 26 11:46:11 compute-0 systemd[150228]: Started Daily Cleanup of User's Temporary Directories.
Nov 26 11:46:11 compute-0 systemd[150228]: Reached target Paths.
Nov 26 11:46:11 compute-0 systemd[150228]: Reached target Timers.
Nov 26 11:46:11 compute-0 systemd[150228]: Starting D-Bus User Message Bus Socket...
Nov 26 11:46:11 compute-0 systemd[150228]: Starting Create User's Volatile Files and Directories...
Nov 26 11:46:11 compute-0 systemd-rc-local-generator[150275]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:46:11 compute-0 systemd-sysv-generator[150278]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:46:11 compute-0 systemd[150228]: Finished Create User's Volatile Files and Directories.
Nov 26 11:46:11 compute-0 systemd[150228]: Listening on D-Bus User Message Bus Socket.
Nov 26 11:46:11 compute-0 systemd[150228]: Reached target Sockets.
Nov 26 11:46:11 compute-0 systemd[150228]: Reached target Basic System.
Nov 26 11:46:11 compute-0 systemd[150228]: Reached target Main User Target.
Nov 26 11:46:11 compute-0 systemd[150228]: Startup finished in 117ms.
Nov 26 11:46:11 compute-0 systemd[1]: Started User Manager for UID 0.
Nov 26 11:46:11 compute-0 systemd[1]: Started ovn_controller container.
Nov 26 11:46:11 compute-0 systemd[1]: cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e-b78b6ac77429198.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 11:46:11 compute-0 systemd[1]: cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e-b78b6ac77429198.service: Failed with result 'exit-code'.
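Note: the failing transient unit named after the container ID appears to be the one podman spawned for the healthcheck (compare "Started /usr/bin/podman healthcheck run cfbeb126..." and the health_status=starting, health_failing_streak=1 entry above); a single failed probe while ovn-controller is still starting is benign as long as later probes pass. A sketch for checking the health state afterwards:

    # trigger one probe by hand; exit code 0 means healthy
    sudo podman healthcheck run ovn_controller; echo $?
    # the health result is also appended to the STATUS column
    sudo podman ps --filter name=ovn_controller --format '{{.Names}} {{.Status}}'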
Nov 26 11:46:11 compute-0 systemd[1]: Started Session c1 of User root.
Nov 26 11:46:11 compute-0 sudo[150144]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:11 compute-0 ovn_controller[150199]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 11:46:11 compute-0 ovn_controller[150199]: INFO:__main__:Validating config file
Nov 26 11:46:11 compute-0 ovn_controller[150199]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 11:46:11 compute-0 ovn_controller[150199]: INFO:__main__:Writing out command to execute
Nov 26 11:46:11 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 26 11:46:11 compute-0 ovn_controller[150199]: ++ cat /run_command
Nov 26 11:46:11 compute-0 ovn_controller[150199]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 26 11:46:11 compute-0 ovn_controller[150199]: + ARGS=
Nov 26 11:46:11 compute-0 ovn_controller[150199]: + sudo kolla_copy_cacerts
Nov 26 11:46:11 compute-0 systemd[1]: Started Session c2 of User root.
Nov 26 11:46:11 compute-0 ovn_controller[150199]: + [[ ! -n '' ]]
Nov 26 11:46:11 compute-0 ovn_controller[150199]: + . kolla_extend_start
Nov 26 11:46:11 compute-0 ovn_controller[150199]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 26 11:46:11 compute-0 ovn_controller[150199]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 26 11:46:11 compute-0 ovn_controller[150199]: + umask 0022
Nov 26 11:46:11 compute-0 ovn_controller[150199]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 26 11:46:11 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 26 11:46:11 compute-0 NetworkManager[48976]: <info>  [1764157571.3321] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 26 11:46:11 compute-0 NetworkManager[48976]: <info>  [1764157571.3324] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 11:46:11 compute-0 NetworkManager[48976]: <info>  [1764157571.3330] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 26 11:46:11 compute-0 NetworkManager[48976]: <info>  [1764157571.3333] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 26 11:46:11 compute-0 NetworkManager[48976]: <info>  [1764157571.3335] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 26 11:46:11 compute-0 kernel: br-int: entered promiscuous mode
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 26 11:46:11 compute-0 ovn_controller[150199]: 2025-11-26T11:46:11Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
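Note: the trace above shows ovn-controller reaching the local ovsdb over unix:/run/openvswitch/db.sock, the southbound database at ssl:ovsdbserver-sb.openstack.svc:6642 using the mounted TLS material, and the br-int management socket. The same state can be confirmed from the host; a sketch (ovn-appctl has to reach the controller's unixctl socket, hence the podman exec):

    # southbound DB connection status as reported by ovn-controller itself
    sudo podman exec ovn_controller ovn-appctl -t ovn-controller connection-status
    # the ovn-* keys the controller reads from the local Open_vSwitch table
    sudo ovs-vsctl --columns=external_ids list Open_vSwitch .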
Nov 26 11:46:11 compute-0 NetworkManager[48976]: <info>  [1764157571.3473] manager: (ovn-9ccbaf-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 26 11:46:11 compute-0 systemd-udevd[150342]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 11:46:11 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Nov 26 11:46:11 compute-0 systemd-udevd[150343]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 11:46:11 compute-0 NetworkManager[48976]: <info>  [1764157571.3618] device (genev_sys_6081): carrier: link connected
Nov 26 11:46:11 compute-0 NetworkManager[48976]: <info>  [1764157571.3620] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 26 11:46:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:46:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:46:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:46:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:46:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:46:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:46:11 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:46:11 compute-0 sudo[150459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fatejlmrbkfvkcltalmdmwdlllmgxgmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157571.37986-609-262263143531568/AnsiballZ_command.py'
Nov 26 11:46:11 compute-0 sudo[150459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:11 compute-0 python3.9[150461]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:46:11 compute-0 ovs-vsctl[150462]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 26 11:46:11 compute-0 sudo[150459]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:12 compute-0 sudo[150612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewizrlxazhlmjsgfdovyccqiavqwtuzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157571.8538864-617-221233703539694/AnsiballZ_command.py'
Nov 26 11:46:12 compute-0 sudo[150612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:12 compute-0 python3.9[150614]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:46:12 compute-0 ovs-vsctl[150616]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 26 11:46:12 compute-0 sudo[150612]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:12 compute-0 ceph-mon[74928]: pgmap v329: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:12 compute-0 sudo[150767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spogmpidvjhotjfxanvsfisnijtgfrzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157572.4740283-631-121075735705727/AnsiballZ_command.py'
Nov 26 11:46:12 compute-0 sudo[150767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:12 compute-0 python3.9[150769]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:46:12 compute-0 ovs-vsctl[150770]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 26 11:46:12 compute-0 sudo[150767]: pam_unix(sudo:session): session closed for user root
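Note: the db_ctl_base ERR above is expected on a node where ovn-cms-options was never set; the task reads external_ids:ovn-cms-options before clearing it, the read fails because the key is absent, and the following remove completes without error (see the entry above). If the read should tolerate a missing key, ovs-vsctl has a flag for that; a sketch:

    # --if-exists makes the get print nothing instead of erroring when the key is absent
    sudo ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-cms-options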
Nov 26 11:46:13 compute-0 sshd-session[138817]: Connection closed by 192.168.122.30 port 46602
Nov 26 11:46:13 compute-0 sshd-session[138814]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:46:13 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Nov 26 11:46:13 compute-0 systemd[1]: session-45.scope: Consumed 40.072s CPU time.
Nov 26 11:46:13 compute-0 systemd-logind[744]: Session 45 logged out. Waiting for processes to exit.
Nov 26 11:46:13 compute-0 systemd-logind[744]: Removed session 45.
Nov 26 11:46:13 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:14 compute-0 ceph-mon[74928]: pgmap v330: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:15 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:16 compute-0 ceph-mon[74928]: pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:46:17 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:18 compute-0 ceph-mon[74928]: pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:18 compute-0 sshd-session[150795]: Accepted publickey for zuul from 192.168.122.30 port 34390 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:46:18 compute-0 systemd-logind[744]: New session 47 of user zuul.
Nov 26 11:46:18 compute-0 systemd[1]: Started Session 47 of User zuul.
Nov 26 11:46:19 compute-0 sshd-session[150795]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:46:19 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:19 compute-0 python3.9[150948]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:46:20 compute-0 sudo[151102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikqbuexwcqkxznmalgxxfqrlsimdlauz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157580.1244972-34-195373060683741/AnsiballZ_file.py'
Nov 26 11:46:20 compute-0 sudo[151102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:20 compute-0 ceph-mon[74928]: pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:20 compute-0 python3.9[151104]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:46:20 compute-0 sudo[151102]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:20 compute-0 sudo[151254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymvtoqvdcakvwstsancsdxwffipxqadr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157580.6768613-34-86216119318876/AnsiballZ_file.py'
Nov 26 11:46:20 compute-0 sudo[151254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:20 compute-0 python3.9[151256]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:46:21 compute-0 sudo[151254]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:21 compute-0 sudo[151406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apoydspkljiyftutrijibbgmvlypvvdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157581.0963664-34-79566638834282/AnsiballZ_file.py'
Nov 26 11:46:21 compute-0 sudo[151406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:21 compute-0 python3.9[151408]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:46:21 compute-0 sudo[151406]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:21 compute-0 systemd[1]: Stopping User Manager for UID 0...
Nov 26 11:46:21 compute-0 systemd[150228]: Activating special unit Exit the Session...
Nov 26 11:46:21 compute-0 systemd[150228]: Stopped target Main User Target.
Nov 26 11:46:21 compute-0 systemd[150228]: Stopped target Basic System.
Nov 26 11:46:21 compute-0 systemd[150228]: Stopped target Paths.
Nov 26 11:46:21 compute-0 systemd[150228]: Stopped target Sockets.
Nov 26 11:46:21 compute-0 systemd[150228]: Stopped target Timers.
Nov 26 11:46:21 compute-0 systemd[150228]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 26 11:46:21 compute-0 systemd[150228]: Closed D-Bus User Message Bus Socket.
Nov 26 11:46:21 compute-0 systemd[150228]: Stopped Create User's Volatile Files and Directories.
Nov 26 11:46:21 compute-0 systemd[150228]: Removed slice User Application Slice.
Nov 26 11:46:21 compute-0 systemd[150228]: Reached target Shutdown.
Nov 26 11:46:21 compute-0 systemd[150228]: Finished Exit the Session.
Nov 26 11:46:21 compute-0 systemd[150228]: Reached target Exit the Session.
Nov 26 11:46:21 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Nov 26 11:46:21 compute-0 systemd[1]: Stopped User Manager for UID 0.
Nov 26 11:46:21 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 26 11:46:21 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 26 11:46:21 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 26 11:46:21 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 26 11:46:21 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Nov 26 11:46:21 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:46:21 compute-0 sudo[151559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axwecyiiftzjrhfmsidyhcnazsftmejn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157581.5322704-34-159434422627762/AnsiballZ_file.py'
Nov 26 11:46:21 compute-0 sudo[151559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:21 compute-0 python3.9[151561]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:46:21 compute-0 sudo[151559]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:22 compute-0 sudo[151711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubfyauzhugpqjoiuwvkqluwrasivwcvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157581.9548917-34-226396094123433/AnsiballZ_file.py'
Nov 26 11:46:22 compute-0 sudo[151711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:22 compute-0 python3.9[151713]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:46:22 compute-0 sudo[151711]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:22 compute-0 ceph-mon[74928]: pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:22 compute-0 python3.9[151863]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:46:23 compute-0 sudo[152013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyinpfkiujzerrkpscsaipwxpypxltnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157582.9438117-78-198269108194917/AnsiballZ_seboolean.py'
Nov 26 11:46:23 compute-0 sudo[152013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:23 compute-0 python3.9[152015]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 26 11:46:23 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:23 compute-0 sudo[152013]: pam_unix(sudo:session): session closed for user root
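Note: the ansible.posix.seboolean task above turns on the virt_sandbox_use_netlink SELinux boolean persistently, presumably so the confined container processes can open netlink sockets. The by-hand equivalent of persistent=True, state=True:

    # show the current value
    getsebool virt_sandbox_use_netlink
    # -P writes the change to the policy store so it survives reboots
    sudo setsebool -P virt_sandbox_use_netlink on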
Nov 26 11:46:24 compute-0 python3.9[152165]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:46:24 compute-0 ceph-mon[74928]: pgmap v335: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:24 compute-0 python3.9[152286]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764157584.0398126-86-279173162508118/.source follow=False _original_basename=haproxy.j2 checksum=deae64da24ad28f71dc47276f2e9f268f19a4519 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:46:25 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:25 compute-0 python3.9[152436]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:46:26 compute-0 python3.9[152557]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764157585.1194034-101-83506918540005/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:46:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:46:26 compute-0 ceph-mon[74928]: pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:26 compute-0 sudo[152707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdxbbdhhfhamdahubgorifkfbcejaing ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157586.4089937-118-178170352400662/AnsiballZ_setup.py'
Nov 26 11:46:26 compute-0 sudo[152707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:26 compute-0 python3.9[152709]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 11:46:27 compute-0 sudo[152707]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:27 compute-0 sudo[152791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leorkvqrilidsarwccrkyltbvwdoliyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157586.4089937-118-178170352400662/AnsiballZ_dnf.py'
Nov 26 11:46:27 compute-0 sudo[152791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:27 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:27 compute-0 python3.9[152793]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 11:46:28 compute-0 sudo[152791]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:28 compute-0 ceph-mon[74928]: pgmap v337: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:29 compute-0 sudo[152944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfxfbtztsgddjakdgqcfntngygzavdtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157588.623549-130-106582699832759/AnsiballZ_systemd.py'
Nov 26 11:46:29 compute-0 sudo[152944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:29 compute-0 python3.9[152946]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 11:46:29 compute-0 sudo[152944]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:29 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:29 compute-0 python3.9[153100]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:46:30 compute-0 python3.9[153221]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764157589.4689744-138-18899347427373/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:46:30 compute-0 ceph-mon[74928]: pgmap v338: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:30 compute-0 python3.9[153371]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:46:31 compute-0 python3.9[153492]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764157590.2462373-138-15570983721706/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:46:31 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:46:31 compute-0 python3.9[153642]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:46:32 compute-0 python3.9[153763]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764157591.631087-182-124669346611206/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:46:32 compute-0 ceph-mon[74928]: pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:32 compute-0 python3.9[153913]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:46:33 compute-0 python3.9[154034]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764157592.4140096-182-79809987784060/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:46:33 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:33 compute-0 python3.9[154184]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:46:33 compute-0 sudo[154336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozptvmiioycaeilswhtwuppushmlldpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157593.7112498-220-281392961285626/AnsiballZ_file.py'
Nov 26 11:46:33 compute-0 sudo[154336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:34 compute-0 python3.9[154338]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:46:34 compute-0 sudo[154336]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:34 compute-0 sudo[154488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erbdmympblnpfginpuqtfsjbfjzrptqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157594.174268-228-239466851690188/AnsiballZ_stat.py'
Nov 26 11:46:34 compute-0 sudo[154488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:34 compute-0 python3.9[154490]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:46:34 compute-0 sudo[154488]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:34 compute-0 ceph-mon[74928]: pgmap v340: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:34 compute-0 sudo[154566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyftptqkijkhrzhtkvefqrfcawbvulfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157594.174268-228-239466851690188/AnsiballZ_file.py'
Nov 26 11:46:34 compute-0 sudo[154566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:34 compute-0 python3.9[154568]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:46:34 compute-0 sudo[154566]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:35 compute-0 sudo[154718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzsbdbvedoepdpqjfvgbktjioxdfsyva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157594.9871087-228-16463803027474/AnsiballZ_stat.py'
Nov 26 11:46:35 compute-0 sudo[154718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:35 compute-0 python3.9[154720]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:46:35 compute-0 sudo[154718]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:35 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:35 compute-0 sudo[154796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tngqeysqaznztfrpyxdvbdtjamppiwef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157594.9871087-228-16463803027474/AnsiballZ_file.py'
Nov 26 11:46:35 compute-0 sudo[154796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:35 compute-0 python3.9[154798]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:46:35 compute-0 sudo[154796]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:36 compute-0 sudo[154948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyydabozffugvdcxphfrxeywyoiljzxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157595.864226-251-169787523267340/AnsiballZ_file.py'
Nov 26 11:46:36 compute-0 sudo[154948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:36 compute-0 python3.9[154950]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:46:36 compute-0 sudo[154948]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:36 compute-0 sudo[155100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmojbgmammqqnlncswgjcplfxvvlwmrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157596.3246794-259-275943858425421/AnsiballZ_stat.py'
Nov 26 11:46:36 compute-0 sudo[155100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:46:36 compute-0 ceph-mon[74928]: pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:36 compute-0 python3.9[155102]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:46:36 compute-0 sudo[155100]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:36 compute-0 sudo[155178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klhufchzriqksqqxhmafrdeocwrkjduj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157596.3246794-259-275943858425421/AnsiballZ_file.py'
Nov 26 11:46:36 compute-0 sudo[155178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:36 compute-0 python3.9[155180]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:46:37 compute-0 sudo[155178]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:37 compute-0 sudo[155330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsdgdvfjldbdauwxsvkkxsyomrgqdgwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157597.1199427-271-195557465591555/AnsiballZ_stat.py'
Nov 26 11:46:37 compute-0 sudo[155330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:37 compute-0 python3.9[155332]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:46:37 compute-0 sudo[155330]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:37 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:37 compute-0 sudo[155408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqgfwjsmkvyzcsbyyruboraayzimpggi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157597.1199427-271-195557465591555/AnsiballZ_file.py'
Nov 26 11:46:37 compute-0 sudo[155408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:37 compute-0 python3.9[155410]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:46:37 compute-0 sudo[155408]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:38 compute-0 sudo[155560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvpvejmnlwhynzqpspgwgtmmkfjckrum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157597.8973916-283-12966839536934/AnsiballZ_systemd.py'
Nov 26 11:46:38 compute-0 sudo[155560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:38 compute-0 python3.9[155562]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:46:38 compute-0 systemd[1]: Reloading.
Nov 26 11:46:38 compute-0 systemd-rc-local-generator[155582]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:46:38 compute-0 systemd-sysv-generator[155589]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:46:38 compute-0 sudo[155560]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:38 compute-0 ceph-mon[74928]: pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:38 compute-0 sudo[155749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlydvfkxjkoaijquiemqteavwtvbnmep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157598.7005646-291-192645167762261/AnsiballZ_stat.py'
Nov 26 11:46:38 compute-0 sudo[155749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:39 compute-0 python3.9[155751]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:46:39 compute-0 sudo[155749]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:39 compute-0 sudo[155827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmdcjucynxjrknrjsisedciktqlrsekw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157598.7005646-291-192645167762261/AnsiballZ_file.py'
Nov 26 11:46:39 compute-0 sudo[155827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:39 compute-0 python3.9[155829]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:46:39 compute-0 sudo[155827]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:39 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:39 compute-0 sudo[155979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fejniapespbhvtgndbpdmlmhnkvbarfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157599.4929318-303-19731876521599/AnsiballZ_stat.py'
Nov 26 11:46:39 compute-0 sudo[155979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:39 compute-0 python3.9[155981]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:46:39 compute-0 sudo[155979]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:40 compute-0 sudo[156057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncsaijxnifxkrgtspzvxtmbguboeheif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157599.4929318-303-19731876521599/AnsiballZ_file.py'
Nov 26 11:46:40 compute-0 sudo[156057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:40 compute-0 python3.9[156059]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:46:40 compute-0 sudo[156057]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:40 compute-0 ceph-mon[74928]: pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:40 compute-0 sudo[156209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rihahyvzfrnsqlnywiayvjowdhjktckg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157600.4290466-315-127828680694912/AnsiballZ_systemd.py'
Nov 26 11:46:40 compute-0 sudo[156209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:40 compute-0 python3.9[156211]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:46:40 compute-0 systemd[1]: Reloading.
Nov 26 11:46:40 compute-0 systemd-sysv-generator[156235]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:46:40 compute-0 systemd-rc-local-generator[156232]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:46:41 compute-0 systemd[1]: Starting Create netns directory...
Nov 26 11:46:41 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 26 11:46:41 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 26 11:46:41 compute-0 systemd[1]: Finished Create netns directory.
Nov 26 11:46:41 compute-0 sudo[156209]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Optimize plan auto_2025-11-26_11:46:41
Nov 26 11:46:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 11:46:41 compute-0 ceph-mgr[75197]: [balancer INFO root] do_upmap
Nov 26 11:46:41 compute-0 ceph-mgr[75197]: [balancer INFO root] pools ['vms', '.mgr', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups']
Nov 26 11:46:41 compute-0 ceph-mgr[75197]: [balancer INFO root] prepared 0/10 changes
Nov 26 11:46:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:46:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:46:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:46:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:46:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:46:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:46:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 11:46:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:46:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 11:46:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:46:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:46:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:46:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:46:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:46:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:46:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:46:41 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:46:41 compute-0 ovn_controller[150199]: 2025-11-26T11:46:41Z|00025|memory|INFO|16256 kB peak resident set size after 30.3 seconds
Nov 26 11:46:41 compute-0 ovn_controller[150199]: 2025-11-26T11:46:41Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 26 11:46:41 compute-0 sudo[156389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:46:41 compute-0 sudo[156442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzlfqnilykhplbrtqnshmqkmnqorlgiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157601.3830378-325-144178962193953/AnsiballZ_file.py'
Nov 26 11:46:41 compute-0 sudo[156389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:46:41 compute-0 sudo[156442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:41 compute-0 sudo[156389]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:41 compute-0 podman[156376]: 2025-11-26 11:46:41.63756788 +0000 UTC m=+0.062855326 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 26 11:46:41 compute-0 sudo[156453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:46:41 compute-0 sudo[156453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:46:41 compute-0 sudo[156453]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:41 compute-0 sudo[156479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:46:41 compute-0 sudo[156479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:46:41 compute-0 sudo[156479]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:41 compute-0 sudo[156504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 11:46:41 compute-0 sudo[156504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:46:41 compute-0 python3.9[156450]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:46:41 compute-0 sudo[156442]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:42 compute-0 sudo[156504]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:46:42 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:46:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:46:42 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:46:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:46:42 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:46:42 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 1f5adb2b-0723-41b2-9330-1fa18ffcd888 does not exist
Nov 26 11:46:42 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 0fa392bc-3059-4cd9-8886-5a1697cdfe18 does not exist
Nov 26 11:46:42 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev f59d6df5-2f11-429b-b027-b139aaa9aec7 does not exist
Nov 26 11:46:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 11:46:42 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:46:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 11:46:42 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:46:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:46:42 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:46:42 compute-0 sudo[156707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmqreqovuzlrijabtgddqtkdrvuubjvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157601.9282389-333-6758832107608/AnsiballZ_stat.py'
Nov 26 11:46:42 compute-0 sudo[156707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:42 compute-0 sudo[156708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:46:42 compute-0 sudo[156708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:46:42 compute-0 sudo[156708]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:42 compute-0 sudo[156735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:46:42 compute-0 sudo[156735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:46:42 compute-0 sudo[156735]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:42 compute-0 sudo[156760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:46:42 compute-0 sudo[156760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:46:42 compute-0 sudo[156760]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:42 compute-0 sudo[156785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 11:46:42 compute-0 sudo[156785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:46:42 compute-0 python3.9[156717]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:46:42 compute-0 sudo[156707]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:42 compute-0 podman[156928]: 2025-11-26 11:46:42.505329169 +0000 UTC m=+0.029710091 container create d0fbba8be2435bcd4e147e1fabf80da75835f69a409ae99a5966620d9a3b3564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:46:42 compute-0 systemd[1]: Started libpod-conmon-d0fbba8be2435bcd4e147e1fabf80da75835f69a409ae99a5966620d9a3b3564.scope.
Nov 26 11:46:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:46:42 compute-0 sudo[156975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfizdvyqcuirtxqxzuqxesgsmbcczdfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157601.9282389-333-6758832107608/AnsiballZ_copy.py'
Nov 26 11:46:42 compute-0 sudo[156975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:42 compute-0 podman[156928]: 2025-11-26 11:46:42.556574617 +0000 UTC m=+0.080955558 container init d0fbba8be2435bcd4e147e1fabf80da75835f69a409ae99a5966620d9a3b3564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_engelbart, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:46:42 compute-0 podman[156928]: 2025-11-26 11:46:42.562004296 +0000 UTC m=+0.086385217 container start d0fbba8be2435bcd4e147e1fabf80da75835f69a409ae99a5966620d9a3b3564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_engelbart, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 11:46:42 compute-0 podman[156928]: 2025-11-26 11:46:42.563691804 +0000 UTC m=+0.088072746 container attach d0fbba8be2435bcd4e147e1fabf80da75835f69a409ae99a5966620d9a3b3564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_engelbart, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 11:46:42 compute-0 compassionate_engelbart[156976]: 167 167
Nov 26 11:46:42 compute-0 systemd[1]: libpod-d0fbba8be2435bcd4e147e1fabf80da75835f69a409ae99a5966620d9a3b3564.scope: Deactivated successfully.
Nov 26 11:46:42 compute-0 conmon[156976]: conmon d0fbba8be2435bcd4e14 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d0fbba8be2435bcd4e147e1fabf80da75835f69a409ae99a5966620d9a3b3564.scope/container/memory.events
Nov 26 11:46:42 compute-0 podman[156928]: 2025-11-26 11:46:42.566569706 +0000 UTC m=+0.090950637 container died d0fbba8be2435bcd4e147e1fabf80da75835f69a409ae99a5966620d9a3b3564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 11:46:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f8faccb7cb02d2d5b0aab53159dfbbd9601b173985e2d6eb3fa21a82f1bd374-merged.mount: Deactivated successfully.
Nov 26 11:46:42 compute-0 podman[156928]: 2025-11-26 11:46:42.588821945 +0000 UTC m=+0.113202866 container remove d0fbba8be2435bcd4e147e1fabf80da75835f69a409ae99a5966620d9a3b3564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 11:46:42 compute-0 podman[156928]: 2025-11-26 11:46:42.494887849 +0000 UTC m=+0.019268790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:46:42 compute-0 ceph-mon[74928]: pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:42 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:46:42 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:46:42 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:46:42 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:46:42 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:46:42 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:46:42 compute-0 systemd[1]: libpod-conmon-d0fbba8be2435bcd4e147e1fabf80da75835f69a409ae99a5966620d9a3b3564.scope: Deactivated successfully.
Nov 26 11:46:42 compute-0 podman[157000]: 2025-11-26 11:46:42.709309144 +0000 UTC m=+0.028027240 container create 205219bb70a88f223de0b70c72058cc628a702f8fad5a9df735b48440b713b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 11:46:42 compute-0 python3.9[156980]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764157601.9282389-333-6758832107608/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:46:42 compute-0 systemd[1]: Started libpod-conmon-205219bb70a88f223de0b70c72058cc628a702f8fad5a9df735b48440b713b76.scope.
Nov 26 11:46:42 compute-0 sudo[156975]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:46:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887c7f264f2a7a23035721a055e3c459b0cc626c571fb1d398a1724c3aed0f09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:46:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887c7f264f2a7a23035721a055e3c459b0cc626c571fb1d398a1724c3aed0f09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:46:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887c7f264f2a7a23035721a055e3c459b0cc626c571fb1d398a1724c3aed0f09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:46:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887c7f264f2a7a23035721a055e3c459b0cc626c571fb1d398a1724c3aed0f09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:46:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887c7f264f2a7a23035721a055e3c459b0cc626c571fb1d398a1724c3aed0f09/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:46:42 compute-0 podman[157000]: 2025-11-26 11:46:42.775710824 +0000 UTC m=+0.094428929 container init 205219bb70a88f223de0b70c72058cc628a702f8fad5a9df735b48440b713b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:46:42 compute-0 podman[157000]: 2025-11-26 11:46:42.781952503 +0000 UTC m=+0.100670598 container start 205219bb70a88f223de0b70c72058cc628a702f8fad5a9df735b48440b713b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 11:46:42 compute-0 podman[157000]: 2025-11-26 11:46:42.783012088 +0000 UTC m=+0.101730184 container attach 205219bb70a88f223de0b70c72058cc628a702f8fad5a9df735b48440b713b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:46:42 compute-0 podman[157000]: 2025-11-26 11:46:42.697927874 +0000 UTC m=+0.016645989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:46:43 compute-0 sudo[157166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljxikfexhpuhqchpoeqlialmjhjxnmnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157603.0204005-350-237600635732222/AnsiballZ_file.py'
Nov 26 11:46:43 compute-0 sudo[157166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:43 compute-0 python3.9[157168]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:46:43 compute-0 sudo[157166]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:43 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:43 compute-0 trusting_allen[157012]: --> passed data devices: 0 physical, 3 LVM
Nov 26 11:46:43 compute-0 trusting_allen[157012]: --> relative data size: 1.0
Nov 26 11:46:43 compute-0 trusting_allen[157012]: --> All data devices are unavailable
Nov 26 11:46:43 compute-0 podman[157000]: 2025-11-26 11:46:43.600510675 +0000 UTC m=+0.919228770 container died 205219bb70a88f223de0b70c72058cc628a702f8fad5a9df735b48440b713b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 11:46:43 compute-0 systemd[1]: libpod-205219bb70a88f223de0b70c72058cc628a702f8fad5a9df735b48440b713b76.scope: Deactivated successfully.
Nov 26 11:46:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-887c7f264f2a7a23035721a055e3c459b0cc626c571fb1d398a1724c3aed0f09-merged.mount: Deactivated successfully.
Nov 26 11:46:43 compute-0 podman[157000]: 2025-11-26 11:46:43.634298196 +0000 UTC m=+0.953016291 container remove 205219bb70a88f223de0b70c72058cc628a702f8fad5a9df735b48440b713b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 11:46:43 compute-0 systemd[1]: libpod-conmon-205219bb70a88f223de0b70c72058cc628a702f8fad5a9df735b48440b713b76.scope: Deactivated successfully.
Nov 26 11:46:43 compute-0 sudo[156785]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:43 compute-0 sudo[157326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:46:43 compute-0 sudo[157326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:46:43 compute-0 sudo[157326]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:43 compute-0 sudo[157377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-engzjmpimgamwsdroirptsymowiekfff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157603.5154567-358-204547325637629/AnsiballZ_stat.py'
Nov 26 11:46:43 compute-0 sudo[157377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:43 compute-0 sudo[157378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:46:43 compute-0 sudo[157378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:46:43 compute-0 sudo[157378]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:43 compute-0 sudo[157405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:46:43 compute-0 sudo[157405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:46:43 compute-0 sudo[157405]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:43 compute-0 sudo[157430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- lvm list --format json
Nov 26 11:46:43 compute-0 sudo[157430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:46:43 compute-0 python3.9[157388]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:46:43 compute-0 sudo[157377]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:44 compute-0 podman[157532]: 2025-11-26 11:46:44.054896866 +0000 UTC m=+0.029389396 container create f4f96c8dfb5e7cb413d5fd793cd25b6be7a3b5f468d4629362a460f66d391c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_murdock, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 11:46:44 compute-0 systemd[1]: Started libpod-conmon-f4f96c8dfb5e7cb413d5fd793cd25b6be7a3b5f468d4629362a460f66d391c24.scope.
Nov 26 11:46:44 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:46:44 compute-0 podman[157532]: 2025-11-26 11:46:44.099754461 +0000 UTC m=+0.074246981 container init f4f96c8dfb5e7cb413d5fd793cd25b6be7a3b5f468d4629362a460f66d391c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:46:44 compute-0 podman[157532]: 2025-11-26 11:46:44.104400202 +0000 UTC m=+0.078892722 container start f4f96c8dfb5e7cb413d5fd793cd25b6be7a3b5f468d4629362a460f66d391c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:46:44 compute-0 reverent_murdock[157568]: 167 167
Nov 26 11:46:44 compute-0 systemd[1]: libpod-f4f96c8dfb5e7cb413d5fd793cd25b6be7a3b5f468d4629362a460f66d391c24.scope: Deactivated successfully.
Nov 26 11:46:44 compute-0 podman[157532]: 2025-11-26 11:46:44.107720607 +0000 UTC m=+0.082213128 container attach f4f96c8dfb5e7cb413d5fd793cd25b6be7a3b5f468d4629362a460f66d391c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_murdock, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 26 11:46:44 compute-0 podman[157532]: 2025-11-26 11:46:44.108442658 +0000 UTC m=+0.082935188 container died f4f96c8dfb5e7cb413d5fd793cd25b6be7a3b5f468d4629362a460f66d391c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_murdock, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:46:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f2cd54f29bde146d8de702285432912f3fa569e10993e28966598b482d304e4-merged.mount: Deactivated successfully.
Nov 26 11:46:44 compute-0 podman[157532]: 2025-11-26 11:46:44.128238119 +0000 UTC m=+0.102730639 container remove f4f96c8dfb5e7cb413d5fd793cd25b6be7a3b5f468d4629362a460f66d391c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_murdock, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 11:46:44 compute-0 podman[157532]: 2025-11-26 11:46:44.042322227 +0000 UTC m=+0.016814768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:46:44 compute-0 systemd[1]: libpod-conmon-f4f96c8dfb5e7cb413d5fd793cd25b6be7a3b5f468d4629362a460f66d391c24.scope: Deactivated successfully.
Nov 26 11:46:44 compute-0 sudo[157635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzhmaxpeowxogjbhgujiousntpbohyld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157603.5154567-358-204547325637629/AnsiballZ_copy.py'
Nov 26 11:46:44 compute-0 sudo[157635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:44 compute-0 podman[157643]: 2025-11-26 11:46:44.245252555 +0000 UTC m=+0.026558632 container create ca9190c9661603b4747805b9f4261d4e0d6879423bc3d9eabc8d247bd576ea4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shamir, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:46:44 compute-0 systemd[1]: Started libpod-conmon-ca9190c9661603b4747805b9f4261d4e0d6879423bc3d9eabc8d247bd576ea4d.scope.
Nov 26 11:46:44 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:46:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96f4816f1c1aeeebe0ef6ba8d719705212b097537541bd9df980e3624bbb0ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:46:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96f4816f1c1aeeebe0ef6ba8d719705212b097537541bd9df980e3624bbb0ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:46:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96f4816f1c1aeeebe0ef6ba8d719705212b097537541bd9df980e3624bbb0ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:46:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96f4816f1c1aeeebe0ef6ba8d719705212b097537541bd9df980e3624bbb0ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:46:44 compute-0 podman[157643]: 2025-11-26 11:46:44.300272234 +0000 UTC m=+0.081578321 container init ca9190c9661603b4747805b9f4261d4e0d6879423bc3d9eabc8d247bd576ea4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 11:46:44 compute-0 podman[157643]: 2025-11-26 11:46:44.305250393 +0000 UTC m=+0.086556479 container start ca9190c9661603b4747805b9f4261d4e0d6879423bc3d9eabc8d247bd576ea4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shamir, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:46:44 compute-0 podman[157643]: 2025-11-26 11:46:44.310700539 +0000 UTC m=+0.092006626 container attach ca9190c9661603b4747805b9f4261d4e0d6879423bc3d9eabc8d247bd576ea4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shamir, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:46:44 compute-0 podman[157643]: 2025-11-26 11:46:44.234622661 +0000 UTC m=+0.015928768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:46:44 compute-0 python3.9[157642]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764157603.5154567-358-204547325637629/.source.json _original_basename=.36fb_1ki follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:46:44 compute-0 sudo[157635]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:44 compute-0 ceph-mon[74928]: pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:44 compute-0 sudo[157810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmcfcrfkcoaebdfhoszfujadosendygm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157604.508765-373-205249483535462/AnsiballZ_file.py'
Nov 26 11:46:44 compute-0 sudo[157810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:44 compute-0 python3.9[157812]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:46:44 compute-0 sudo[157810]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:44 compute-0 eager_shamir[157656]: {
Nov 26 11:46:44 compute-0 eager_shamir[157656]:     "0": [
Nov 26 11:46:44 compute-0 eager_shamir[157656]:         {
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "devices": [
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "/dev/loop3"
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             ],
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "lv_name": "ceph_lv0",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "lv_size": "21470642176",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a9ad59a0-aa2e-4d92-b571-519d2d145b6a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "lv_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "name": "ceph_lv0",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "tags": {
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.block_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.cluster_name": "ceph",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.crush_device_class": "",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.encrypted": "0",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.osd_fsid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.osd_id": "0",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.type": "block",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.vdo": "0"
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             },
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "type": "block",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "vg_name": "ceph_vg0"
Nov 26 11:46:44 compute-0 eager_shamir[157656]:         }
Nov 26 11:46:44 compute-0 eager_shamir[157656]:     ],
Nov 26 11:46:44 compute-0 eager_shamir[157656]:     "1": [
Nov 26 11:46:44 compute-0 eager_shamir[157656]:         {
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "devices": [
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "/dev/loop4"
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             ],
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "lv_name": "ceph_lv1",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "lv_size": "21470642176",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2627095b-eef8-4027-bfef-68bf7cb6801f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "lv_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "name": "ceph_lv1",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "tags": {
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.block_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.cluster_name": "ceph",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.crush_device_class": "",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.encrypted": "0",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.osd_fsid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.osd_id": "1",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.type": "block",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.vdo": "0"
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             },
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "type": "block",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "vg_name": "ceph_vg1"
Nov 26 11:46:44 compute-0 eager_shamir[157656]:         }
Nov 26 11:46:44 compute-0 eager_shamir[157656]:     ],
Nov 26 11:46:44 compute-0 eager_shamir[157656]:     "2": [
Nov 26 11:46:44 compute-0 eager_shamir[157656]:         {
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "devices": [
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "/dev/loop5"
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             ],
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "lv_name": "ceph_lv2",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "lv_size": "21470642176",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d56156fb-7361-4bef-b06b-1320109b4323,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "lv_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "name": "ceph_lv2",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "tags": {
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.block_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.cluster_name": "ceph",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.crush_device_class": "",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.encrypted": "0",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.osd_fsid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.osd_id": "2",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.type": "block",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:                 "ceph.vdo": "0"
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             },
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "type": "block",
Nov 26 11:46:44 compute-0 eager_shamir[157656]:             "vg_name": "ceph_vg2"
Nov 26 11:46:44 compute-0 eager_shamir[157656]:         }
Nov 26 11:46:44 compute-0 eager_shamir[157656]:     ]
Nov 26 11:46:44 compute-0 eager_shamir[157656]: }
Nov 26 11:46:44 compute-0 systemd[1]: libpod-ca9190c9661603b4747805b9f4261d4e0d6879423bc3d9eabc8d247bd576ea4d.scope: Deactivated successfully.
Nov 26 11:46:44 compute-0 podman[157841]: 2025-11-26 11:46:44.969534727 +0000 UTC m=+0.016794729 container died ca9190c9661603b4747805b9f4261d4e0d6879423bc3d9eabc8d247bd576ea4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shamir, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:46:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f96f4816f1c1aeeebe0ef6ba8d719705212b097537541bd9df980e3624bbb0ff-merged.mount: Deactivated successfully.
Nov 26 11:46:45 compute-0 podman[157841]: 2025-11-26 11:46:45.000359166 +0000 UTC m=+0.047619148 container remove ca9190c9661603b4747805b9f4261d4e0d6879423bc3d9eabc8d247bd576ea4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 26 11:46:45 compute-0 systemd[1]: libpod-conmon-ca9190c9661603b4747805b9f4261d4e0d6879423bc3d9eabc8d247bd576ea4d.scope: Deactivated successfully.
Nov 26 11:46:45 compute-0 sudo[157430]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:45 compute-0 sudo[157897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:46:45 compute-0 sudo[157897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:46:45 compute-0 sudo[157897]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:45 compute-0 sudo[157935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:46:45 compute-0 sudo[157935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:46:45 compute-0 sudo[157935]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:45 compute-0 sudo[157979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:46:45 compute-0 sudo[157979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:46:45 compute-0 sudo[157979]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:45 compute-0 sudo[158076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adxcinsnmemuebndbaprszgtfbxkoxgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157605.012855-381-34711397815303/AnsiballZ_stat.py'
Nov 26 11:46:45 compute-0 sudo[158031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- raw list --format json
Nov 26 11:46:45 compute-0 sudo[158076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:45 compute-0 sudo[158031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:46:45 compute-0 sudo[158076]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:45 compute-0 podman[158118]: 2025-11-26 11:46:45.434430486 +0000 UTC m=+0.027896404 container create 61e562abcda22676ae205b2ae8f5e0db867b975157a2f101fae32d57ed07bc49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:46:45 compute-0 systemd[1]: Started libpod-conmon-61e562abcda22676ae205b2ae8f5e0db867b975157a2f101fae32d57ed07bc49.scope.
Nov 26 11:46:45 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:46:45 compute-0 podman[158118]: 2025-11-26 11:46:45.484116246 +0000 UTC m=+0.077582183 container init 61e562abcda22676ae205b2ae8f5e0db867b975157a2f101fae32d57ed07bc49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_agnesi, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 11:46:45 compute-0 podman[158118]: 2025-11-26 11:46:45.488495276 +0000 UTC m=+0.081961192 container start 61e562abcda22676ae205b2ae8f5e0db867b975157a2f101fae32d57ed07bc49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:46:45 compute-0 podman[158118]: 2025-11-26 11:46:45.489721255 +0000 UTC m=+0.083187193 container attach 61e562abcda22676ae205b2ae8f5e0db867b975157a2f101fae32d57ed07bc49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_agnesi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 11:46:45 compute-0 wizardly_agnesi[158166]: 167 167
Nov 26 11:46:45 compute-0 systemd[1]: libpod-61e562abcda22676ae205b2ae8f5e0db867b975157a2f101fae32d57ed07bc49.scope: Deactivated successfully.
Nov 26 11:46:45 compute-0 podman[158118]: 2025-11-26 11:46:45.492278552 +0000 UTC m=+0.085744470 container died 61e562abcda22676ae205b2ae8f5e0db867b975157a2f101fae32d57ed07bc49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_agnesi, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 26 11:46:45 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-460eace4e9b47657e1837c8c97002646928db042bc78d9d207489ef32689a056-merged.mount: Deactivated successfully.
Nov 26 11:46:45 compute-0 podman[158118]: 2025-11-26 11:46:45.511745565 +0000 UTC m=+0.105211482 container remove 61e562abcda22676ae205b2ae8f5e0db867b975157a2f101fae32d57ed07bc49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_agnesi, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 11:46:45 compute-0 podman[158118]: 2025-11-26 11:46:45.424153266 +0000 UTC m=+0.017619203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:46:45 compute-0 systemd[1]: libpod-conmon-61e562abcda22676ae205b2ae8f5e0db867b975157a2f101fae32d57ed07bc49.scope: Deactivated successfully.
Nov 26 11:46:45 compute-0 sudo[158276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juwhttoqdjjxhtquzvaewbnfezbmluki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157605.012855-381-34711397815303/AnsiballZ_copy.py'
Nov 26 11:46:45 compute-0 sudo[158276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:45 compute-0 podman[158252]: 2025-11-26 11:46:45.637383026 +0000 UTC m=+0.032620422 container create fdc64a84edb288efe5b1e297f5136a1497815654800d1efe94f2a00674bd8012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kowalevski, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 11:46:45 compute-0 systemd[1]: Started libpod-conmon-fdc64a84edb288efe5b1e297f5136a1497815654800d1efe94f2a00674bd8012.scope.
Nov 26 11:46:45 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:46:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38d4bd60030e710422b42a7168c2ae12f472799c2e250670cf6f2e4da443eb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:46:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38d4bd60030e710422b42a7168c2ae12f472799c2e250670cf6f2e4da443eb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:46:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38d4bd60030e710422b42a7168c2ae12f472799c2e250670cf6f2e4da443eb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:46:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38d4bd60030e710422b42a7168c2ae12f472799c2e250670cf6f2e4da443eb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:46:45 compute-0 podman[158252]: 2025-11-26 11:46:45.706293833 +0000 UTC m=+0.101531230 container init fdc64a84edb288efe5b1e297f5136a1497815654800d1efe94f2a00674bd8012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:46:45 compute-0 podman[158252]: 2025-11-26 11:46:45.711584821 +0000 UTC m=+0.106822216 container start fdc64a84edb288efe5b1e297f5136a1497815654800d1efe94f2a00674bd8012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:46:45 compute-0 podman[158252]: 2025-11-26 11:46:45.71366044 +0000 UTC m=+0.108897836 container attach fdc64a84edb288efe5b1e297f5136a1497815654800d1efe94f2a00674bd8012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:46:45 compute-0 podman[158252]: 2025-11-26 11:46:45.626837881 +0000 UTC m=+0.022075297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:46:45 compute-0 sudo[158276]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:46 compute-0 sudo[158449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fforcsotfbvrdaofkztgkblxhyxaivma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157606.0323987-398-188046337866258/AnsiballZ_container_config_data.py'
Nov 26 11:46:46 compute-0 sudo[158449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:46 compute-0 optimistic_kowalevski[158284]: {
Nov 26 11:46:46 compute-0 optimistic_kowalevski[158284]:     "2627095b-eef8-4027-bfef-68bf7cb6801f": {
Nov 26 11:46:46 compute-0 optimistic_kowalevski[158284]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:46:46 compute-0 optimistic_kowalevski[158284]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 11:46:46 compute-0 optimistic_kowalevski[158284]:         "osd_id": 1,
Nov 26 11:46:46 compute-0 optimistic_kowalevski[158284]:         "osd_uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:46:46 compute-0 optimistic_kowalevski[158284]:         "type": "bluestore"
Nov 26 11:46:46 compute-0 optimistic_kowalevski[158284]:     },
Nov 26 11:46:46 compute-0 optimistic_kowalevski[158284]:     "a9ad59a0-aa2e-4d92-b571-519d2d145b6a": {
Nov 26 11:46:46 compute-0 optimistic_kowalevski[158284]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:46:46 compute-0 optimistic_kowalevski[158284]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 11:46:46 compute-0 optimistic_kowalevski[158284]:         "osd_id": 0,
Nov 26 11:46:46 compute-0 optimistic_kowalevski[158284]:         "osd_uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:46:46 compute-0 optimistic_kowalevski[158284]:         "type": "bluestore"
Nov 26 11:46:46 compute-0 optimistic_kowalevski[158284]:     },
Nov 26 11:46:46 compute-0 optimistic_kowalevski[158284]:     "d56156fb-7361-4bef-b06b-1320109b4323": {
Nov 26 11:46:46 compute-0 optimistic_kowalevski[158284]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:46:46 compute-0 optimistic_kowalevski[158284]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 11:46:46 compute-0 optimistic_kowalevski[158284]:         "osd_id": 2,
Nov 26 11:46:46 compute-0 optimistic_kowalevski[158284]:         "osd_uuid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:46:46 compute-0 optimistic_kowalevski[158284]:         "type": "bluestore"
Nov 26 11:46:46 compute-0 optimistic_kowalevski[158284]:     }
Nov 26 11:46:46 compute-0 optimistic_kowalevski[158284]: }
Nov 26 11:46:46 compute-0 systemd[1]: libpod-fdc64a84edb288efe5b1e297f5136a1497815654800d1efe94f2a00674bd8012.scope: Deactivated successfully.
Nov 26 11:46:46 compute-0 podman[158252]: 2025-11-26 11:46:46.477898122 +0000 UTC m=+0.873135518 container died fdc64a84edb288efe5b1e297f5136a1497815654800d1efe94f2a00674bd8012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kowalevski, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:46:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-c38d4bd60030e710422b42a7168c2ae12f472799c2e250670cf6f2e4da443eb2-merged.mount: Deactivated successfully.
Nov 26 11:46:46 compute-0 python3.9[158451]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 26 11:46:46 compute-0 podman[158252]: 2025-11-26 11:46:46.51165788 +0000 UTC m=+0.906895276 container remove fdc64a84edb288efe5b1e297f5136a1497815654800d1efe94f2a00674bd8012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:46:46 compute-0 systemd[1]: libpod-conmon-fdc64a84edb288efe5b1e297f5136a1497815654800d1efe94f2a00674bd8012.scope: Deactivated successfully.
Nov 26 11:46:46 compute-0 sudo[158449]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:46 compute-0 sudo[158031]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:46:46 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:46:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:46:46 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:46:46 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev a639d218-0058-46c3-9333-0b7110a4a989 does not exist
Nov 26 11:46:46 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 6adf8eaa-381b-48fa-a4b8-b6fbe78b3f49 does not exist
Nov 26 11:46:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:46:46 compute-0 sudo[158487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:46:46 compute-0 sudo[158487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:46:46 compute-0 sudo[158487]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:46 compute-0 ceph-mon[74928]: pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:46 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:46:46 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:46:46 compute-0 sudo[158527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:46:46 compute-0 sudo[158527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:46:46 compute-0 sudo[158527]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:46 compute-0 sudo[158677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgazlfzrcclycfzsszehtzmkvbytikph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157606.6620903-407-244461228990353/AnsiballZ_container_config_hash.py'
Nov 26 11:46:46 compute-0 sudo[158677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:47 compute-0 python3.9[158679]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 11:46:47 compute-0 sudo[158677]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:47 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:47 compute-0 sudo[158829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmaxtimyybxjqudroayflkwcotrdkwef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157607.2678096-416-277638830932764/AnsiballZ_podman_container_info.py'
Nov 26 11:46:47 compute-0 sudo[158829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:47 compute-0 python3.9[158831]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 26 11:46:47 compute-0 sudo[158829]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:48 compute-0 ceph-mon[74928]: pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:48 compute-0 sudo[159000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjxxandwlmuwkxocgpszdljbvsvzvhpp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764157608.280053-429-118287833673591/AnsiballZ_edpm_container_manage.py'
Nov 26 11:46:48 compute-0 sudo[159000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:48 compute-0 python3[159002]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 11:46:49 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:50 compute-0 ceph-mon[74928]: pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 11:46:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:46:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 11:46:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:46:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:46:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:46:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:46:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:46:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:46:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:46:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:46:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:46:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 11:46:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:46:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:46:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:46:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 11:46:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:46:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 11:46:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:46:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:46:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:46:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 11:46:51 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:46:52 compute-0 ceph-mon[74928]: pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:53 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:54 compute-0 ceph-mon[74928]: pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:55 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:46:56 compute-0 ceph-mon[74928]: pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:57 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:57 compute-0 podman[159013]: 2025-11-26 11:46:57.823057864 +0000 UTC m=+8.824010703 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 26 11:46:57 compute-0 podman[159123]: 2025-11-26 11:46:57.914411139 +0000 UTC m=+0.028516291 container create 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 26 11:46:57 compute-0 podman[159123]: 2025-11-26 11:46:57.900199256 +0000 UTC m=+0.014304428 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 26 11:46:57 compute-0 python3[159002]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 26 11:46:57 compute-0 sudo[159000]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:58 compute-0 sudo[159300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjqjxiarsgfrglhkklcvnmptxhhexzib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157618.1619518-437-60704536323963/AnsiballZ_stat.py'
Nov 26 11:46:58 compute-0 sudo[159300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:58 compute-0 python3.9[159302]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:46:58 compute-0 sudo[159300]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:58 compute-0 ceph-mon[74928]: pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:59 compute-0 sudo[159454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbdlevtyfzuloubtchfwletwfzlihtuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157618.8195145-446-25799164934985/AnsiballZ_file.py'
Nov 26 11:46:59 compute-0 sudo[159454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:59 compute-0 python3.9[159456]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:46:59 compute-0 sudo[159454]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:59 compute-0 sudo[159530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqxvqouajufevfkjjpezwoivbouhynuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157618.8195145-446-25799164934985/AnsiballZ_stat.py'
Nov 26 11:46:59 compute-0 sudo[159530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:59 compute-0 python3.9[159532]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:46:59 compute-0 sudo[159530]: pam_unix(sudo:session): session closed for user root
Nov 26 11:46:59 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:46:59 compute-0 sudo[159681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjuooboyjxbqaeqqzdszfmieqjtefhwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157619.5301754-446-148394399518550/AnsiballZ_copy.py'
Nov 26 11:46:59 compute-0 sudo[159681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:46:59 compute-0 python3.9[159683]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764157619.5301754-446-148394399518550/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:46:59 compute-0 sudo[159681]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:00 compute-0 sudo[159757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bluivrkelbvnmudvydeygsscxbidupvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157619.5301754-446-148394399518550/AnsiballZ_systemd.py'
Nov 26 11:47:00 compute-0 sudo[159757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:00 compute-0 python3.9[159759]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 11:47:00 compute-0 systemd[1]: Reloading.
Nov 26 11:47:00 compute-0 systemd-rc-local-generator[159780]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:47:00 compute-0 systemd-sysv-generator[159783]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:47:00 compute-0 sudo[159757]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:00 compute-0 ceph-mon[74928]: pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:00 compute-0 sudo[159868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elpapclizpkvxlsmlsduebgtlnkjynrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157619.5301754-446-148394399518550/AnsiballZ_systemd.py'
Nov 26 11:47:00 compute-0 sudo[159868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:01 compute-0 python3.9[159870]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:47:01 compute-0 systemd[1]: Reloading.
Nov 26 11:47:01 compute-0 systemd-sysv-generator[159900]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:47:01 compute-0 systemd-rc-local-generator[159897]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:47:01 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Nov 26 11:47:01 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:47:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c1db8dc6f059605989c41a152f095d43a28466ca47c5f6df76e80cb0ef58463/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 26 11:47:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c1db8dc6f059605989c41a152f095d43a28466ca47c5f6df76e80cb0ef58463/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 26 11:47:01 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803.
Nov 26 11:47:01 compute-0 podman[159911]: 2025-11-26 11:47:01.39872661 +0000 UTC m=+0.081378425 container init 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: + sudo -E kolla_set_configs
Nov 26 11:47:01 compute-0 podman[159911]: 2025-11-26 11:47:01.413558161 +0000 UTC m=+0.096209956 container start 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 26 11:47:01 compute-0 edpm-start-podman-container[159911]: ovn_metadata_agent
Nov 26 11:47:01 compute-0 podman[159930]: 2025-11-26 11:47:01.460139512 +0000 UTC m=+0.037241548 container health_status 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:47:01 compute-0 edpm-start-podman-container[159910]: Creating additional drop-in dependency for "ovn_metadata_agent" (5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803)
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: INFO:__main__:Validating config file
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: INFO:__main__:Copying service configuration files
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: INFO:__main__:Writing out command to execute
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: ++ cat /run_command
Nov 26 11:47:01 compute-0 systemd[1]: Reloading.
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: + CMD=neutron-ovn-metadata-agent
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: + ARGS=
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: + sudo kolla_copy_cacerts
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: Running command: 'neutron-ovn-metadata-agent'
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: + [[ ! -n '' ]]
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: + . kolla_extend_start
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: + umask 0022
Nov 26 11:47:01 compute-0 ovn_metadata_agent[159923]: + exec neutron-ovn-metadata-agent
Nov 26 11:47:01 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:01 compute-0 systemd-rc-local-generator[159988]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:47:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:47:01 compute-0 systemd-sysv-generator[159991]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:47:01 compute-0 systemd[1]: Started ovn_metadata_agent container.
Nov 26 11:47:01 compute-0 sudo[159868]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:02 compute-0 sshd-session[150798]: Connection closed by 192.168.122.30 port 34390
Nov 26 11:47:02 compute-0 sshd-session[150795]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:47:02 compute-0 systemd-logind[744]: Session 47 logged out. Waiting for processes to exit.
Nov 26 11:47:02 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Nov 26 11:47:02 compute-0 systemd[1]: session-47.scope: Consumed 39.418s CPU time.
Nov 26 11:47:02 compute-0 systemd-logind[744]: Removed session 47.
Nov 26 11:47:02 compute-0 ceph-mon[74928]: pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.948 159928 INFO neutron.common.config [-] Logging enabled!
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.948 159928 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.948 159928 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.949 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.949 159928 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.949 159928 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.949 159928 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.949 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.949 159928 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.949 159928 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.949 159928 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.949 159928 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.950 159928 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.950 159928 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.950 159928 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.950 159928 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.950 159928 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.950 159928 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.950 159928 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.950 159928 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.950 159928 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.950 159928 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.951 159928 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.951 159928 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.951 159928 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.951 159928 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.951 159928 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.951 159928 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.951 159928 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.951 159928 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.951 159928 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.951 159928 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.952 159928 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.952 159928 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.952 159928 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.952 159928 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.952 159928 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.952 159928 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.952 159928 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.952 159928 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.952 159928 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.953 159928 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.953 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.953 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.953 159928 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.953 159928 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.953 159928 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.953 159928 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.953 159928 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.953 159928 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.953 159928 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.954 159928 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.954 159928 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.954 159928 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.954 159928 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.954 159928 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.954 159928 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.954 159928 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.954 159928 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.954 159928 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.954 159928 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.955 159928 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.955 159928 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.955 159928 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.955 159928 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.955 159928 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.955 159928 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.955 159928 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.955 159928 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.955 159928 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.955 159928 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.956 159928 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.956 159928 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.956 159928 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.956 159928 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.956 159928 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.956 159928 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.956 159928 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.956 159928 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.956 159928 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.956 159928 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.957 159928 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.957 159928 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.957 159928 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.957 159928 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.957 159928 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.957 159928 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.957 159928 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.957 159928 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.957 159928 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.957 159928 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.958 159928 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.958 159928 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.958 159928 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.958 159928 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.958 159928 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.958 159928 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.958 159928 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.958 159928 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.958 159928 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.958 159928 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.958 159928 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.959 159928 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.959 159928 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.959 159928 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.959 159928 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.959 159928 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.959 159928 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.959 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.959 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.959 159928 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.960 159928 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.960 159928 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.960 159928 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.960 159928 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.960 159928 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.960 159928 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.960 159928 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.960 159928 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.960 159928 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.960 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.961 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.961 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.961 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.961 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.961 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.961 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.961 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.961 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.961 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.961 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.962 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.962 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.962 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.962 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.962 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.962 159928 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.962 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.962 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.962 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.963 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.963 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.963 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.963 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.963 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.963 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.963 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.963 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.963 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.963 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.964 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.964 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.964 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.964 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.964 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.964 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.964 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.964 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.964 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.964 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.965 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.965 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.965 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.965 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.965 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.965 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.965 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.965 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.965 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.966 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.966 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.966 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.966 159928 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.966 159928 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.966 159928 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.966 159928 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.966 159928 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.966 159928 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.966 159928 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.967 159928 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.967 159928 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.967 159928 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.967 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.967 159928 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.967 159928 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.967 159928 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.967 159928 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.967 159928 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.967 159928 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.968 159928 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.968 159928 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.968 159928 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.968 159928 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.968 159928 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.968 159928 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.968 159928 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.968 159928 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.968 159928 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.969 159928 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.969 159928 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.969 159928 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.969 159928 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.969 159928 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.969 159928 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.969 159928 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.969 159928 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.969 159928 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.969 159928 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.970 159928 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.970 159928 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.970 159928 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.970 159928 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.970 159928 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.970 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.970 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.970 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.970 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.970 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.971 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.971 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.971 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.971 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.971 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.971 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.971 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.971 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.971 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.971 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.972 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.972 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.972 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.972 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.972 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.972 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.972 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.972 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.972 159928 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.972 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.973 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.973 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.973 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.973 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.973 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.973 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.973 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.973 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.973 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.973 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.974 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.974 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.974 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.974 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.974 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.974 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.974 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.974 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.974 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.974 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.975 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.975 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.975 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.975 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.975 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.975 159928 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.975 159928 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.975 159928 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.975 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.976 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.976 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.976 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.976 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.976 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.976 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.976 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.976 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.976 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.976 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.977 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.977 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.977 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.977 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.977 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.977 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.977 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.977 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.977 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.977 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.978 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.978 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.978 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.978 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.978 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.978 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.978 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.978 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.978 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.979 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.979 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.979 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.979 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.979 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.979 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.979 159928 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.979 159928 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.986 159928 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.986 159928 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.986 159928 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.987 159928 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.987 159928 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Nov 26 11:47:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:02.997 159928 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 52e0423b-b2d6-4490-a138-5f72d3aa5a2d (UUID: 52e0423b-b2d6-4490-a138-5f72d3aa5a2d) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.016 159928 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.016 159928 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.016 159928 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.016 159928 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.019 159928 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.024 159928 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.029 159928 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '52e0423b-b2d6-4490-a138-5f72d3aa5a2d'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f06fd5a3f40>], external_ids={}, name=52e0423b-b2d6-4490-a138-5f72d3aa5a2d, nb_cfg_timestamp=1764157579351, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.029 159928 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f06fd5a6b20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.030 159928 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.030 159928 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.030 159928 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.030 159928 INFO oslo_service.service [-] Starting 1 workers
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.033 159928 DEBUG oslo_service.service [-] Started child 160030 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.036 160030 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-899738'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.036 159928 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp24lbe2di/privsep.sock']
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.052 160030 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.052 160030 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.052 160030 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.055 160030 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.060 160030 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.064 160030 INFO eventlet.wsgi.server [-] (160030) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Nov 26 11:47:03 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 26 11:47:03 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.563 159928 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.564 159928 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp24lbe2di/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.486 160035 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.489 160035 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.491 160035 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.491 160035 INFO oslo.privsep.daemon [-] privsep daemon running as pid 160035
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.566 160035 DEBUG oslo.privsep.daemon [-] privsep: reply[b02ef775-16ae-4fda-8e97-54d078fae1b0]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.984 160035 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.984 160035 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:47:03 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:03.984 160035 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.435 160035 DEBUG oslo.privsep.daemon [-] privsep: reply[4fbac2dc-7cda-491a-ae07-071c2b842b03]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.437 159928 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=52e0423b-b2d6-4490-a138-5f72d3aa5a2d, column=external_ids, values=({'neutron:ovn-metadata-id': '01b3b318-4ba3-5cd9-961f-1864972e968a'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.443 159928 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52e0423b-b2d6-4490-a138-5f72d3aa5a2d, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.449 159928 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.450 159928 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.450 159928 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.450 159928 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.450 159928 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.450 159928 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.450 159928 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.450 159928 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.450 159928 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.450 159928 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.450 159928 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.451 159928 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.451 159928 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.451 159928 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.451 159928 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.451 159928 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.451 159928 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.451 159928 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.451 159928 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.452 159928 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.452 159928 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.452 159928 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.452 159928 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.452 159928 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.452 159928 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.452 159928 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.452 159928 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.452 159928 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.453 159928 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.453 159928 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.453 159928 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.453 159928 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.453 159928 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.453 159928 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.453 159928 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.453 159928 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.453 159928 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.454 159928 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.454 159928 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.454 159928 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.454 159928 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.454 159928 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.454 159928 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.454 159928 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.454 159928 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.455 159928 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.455 159928 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.455 159928 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.455 159928 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.455 159928 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.455 159928 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.455 159928 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.455 159928 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.455 159928 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.455 159928 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.455 159928 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.456 159928 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.456 159928 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.456 159928 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.456 159928 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.456 159928 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.456 159928 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.456 159928 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.456 159928 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.456 159928 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.457 159928 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.457 159928 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.457 159928 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.457 159928 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.457 159928 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.457 159928 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.457 159928 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.457 159928 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.457 159928 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.457 159928 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.458 159928 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.458 159928 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.458 159928 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.458 159928 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.458 159928 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.458 159928 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.458 159928 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.458 159928 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.458 159928 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.458 159928 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.459 159928 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.459 159928 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.459 159928 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.459 159928 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.459 159928 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.459 159928 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.459 159928 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.459 159928 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.459 159928 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.459 159928 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.459 159928 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.460 159928 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.460 159928 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.460 159928 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.460 159928 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.460 159928 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.460 159928 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.460 159928 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.460 159928 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.460 159928 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.460 159928 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.461 159928 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.461 159928 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.461 159928 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.461 159928 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.461 159928 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.461 159928 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.461 159928 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.461 159928 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.461 159928 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.462 159928 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.462 159928 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.462 159928 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.462 159928 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.462 159928 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.462 159928 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.462 159928 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.462 159928 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.462 159928 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.463 159928 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.463 159928 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.463 159928 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.463 159928 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.463 159928 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.463 159928 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.463 159928 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.463 159928 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.463 159928 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.463 159928 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.464 159928 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.464 159928 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.464 159928 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.464 159928 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.464 159928 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.464 159928 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.464 159928 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.464 159928 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.464 159928 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.465 159928 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.465 159928 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.465 159928 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.465 159928 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.465 159928 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.465 159928 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.465 159928 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.465 159928 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.465 159928 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.465 159928 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.465 159928 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.466 159928 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.466 159928 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.466 159928 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.466 159928 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.466 159928 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.466 159928 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.466 159928 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.466 159928 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.466 159928 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.466 159928 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.466 159928 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.467 159928 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.467 159928 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.467 159928 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.467 159928 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.467 159928 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.467 159928 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.467 159928 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.467 159928 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.467 159928 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.467 159928 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.468 159928 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.468 159928 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.468 159928 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.468 159928 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.468 159928 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.468 159928 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.468 159928 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.468 159928 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.468 159928 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.468 159928 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.469 159928 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.469 159928 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.469 159928 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.469 159928 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.469 159928 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.469 159928 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.469 159928 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.469 159928 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.469 159928 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.469 159928 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.470 159928 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.470 159928 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.470 159928 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.470 159928 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.470 159928 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.470 159928 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.470 159928 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.470 159928 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.470 159928 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.470 159928 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.471 159928 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.471 159928 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.471 159928 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.471 159928 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.471 159928 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.471 159928 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.471 159928 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.471 159928 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.471 159928 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.471 159928 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.472 159928 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.472 159928 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.472 159928 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.472 159928 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.472 159928 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.472 159928 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.472 159928 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.472 159928 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.472 159928 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.472 159928 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.472 159928 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.473 159928 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.473 159928 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.473 159928 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.473 159928 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.473 159928 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.473 159928 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.473 159928 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.473 159928 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.473 159928 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.473 159928 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.474 159928 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.474 159928 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.474 159928 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.474 159928 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.474 159928 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.474 159928 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.474 159928 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.474 159928 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.474 159928 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.474 159928 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.475 159928 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.475 159928 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.475 159928 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.475 159928 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.475 159928 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.475 159928 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.475 159928 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.475 159928 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.476 159928 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.476 159928 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.476 159928 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.476 159928 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.476 159928 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.476 159928 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.476 159928 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.476 159928 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.476 159928 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.476 159928 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.477 159928 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.477 159928 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.477 159928 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.477 159928 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.477 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.477 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.477 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.477 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.477 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.477 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.478 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.478 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.478 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.478 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.478 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.478 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.478 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.478 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.478 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.478 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.479 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.479 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.479 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.479 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.479 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.479 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.479 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.479 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.479 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.479 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.480 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.480 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.480 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.480 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.480 159928 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.480 159928 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.480 159928 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.480 159928 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.480 159928 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:47:04 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:47:04.480 159928 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 26 11:47:04 compute-0 ceph-mon[74928]: pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:05 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:47:06 compute-0 ceph-mon[74928]: pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:07 compute-0 sshd-session[160040]: Accepted publickey for zuul from 192.168.122.30 port 51510 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:47:07 compute-0 systemd-logind[744]: New session 48 of user zuul.
Nov 26 11:47:07 compute-0 systemd[1]: Started Session 48 of User zuul.
Nov 26 11:47:07 compute-0 sshd-session[160040]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:47:07 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:08 compute-0 python3.9[160193]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:47:08 compute-0 ceph-mon[74928]: pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:08 compute-0 sudo[160347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjkddmessumvyczrprwnaoufroqrnosy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157628.5039773-34-167618607716082/AnsiballZ_command.py'
Nov 26 11:47:08 compute-0 sudo[160347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:08 compute-0 python3.9[160349]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:47:09 compute-0 sudo[160347]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:09 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:09 compute-0 sudo[160508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqrzgbuzcpfkzvfgnkscnonlmdjfhvki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157629.2494054-45-166465352912766/AnsiballZ_systemd_service.py'
Nov 26 11:47:09 compute-0 sudo[160508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:09 compute-0 python3.9[160510]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 11:47:09 compute-0 systemd[1]: Reloading.
Nov 26 11:47:09 compute-0 systemd-rc-local-generator[160531]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:47:09 compute-0 systemd-sysv-generator[160534]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:47:10 compute-0 sudo[160508]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:10 compute-0 ceph-mon[74928]: pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:10 compute-0 python3.9[160695]: ansible-ansible.builtin.service_facts Invoked
Nov 26 11:47:10 compute-0 network[160712]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 11:47:10 compute-0 network[160713]: 'network-scripts' will be removed from distribution in near future.
Nov 26 11:47:10 compute-0 network[160714]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 11:47:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:47:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:47:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:47:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:47:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:47:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:47:11 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:47:11 compute-0 podman[160753]: 2025-11-26 11:47:11.735829463 +0000 UTC m=+0.065264659 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller)
Nov 26 11:47:12 compute-0 ceph-mon[74928]: pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:12 compute-0 sudo[160999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnkztgxmuturctrzljdjodotjcwzzcmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157632.7351394-64-141158796222453/AnsiballZ_systemd_service.py'
Nov 26 11:47:12 compute-0 sudo[160999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:13 compute-0 python3.9[161001]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:47:13 compute-0 sudo[160999]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:13 compute-0 sudo[161152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxaihccezigkukyjpswxwaiedosaikot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157633.2852325-64-242271776154295/AnsiballZ_systemd_service.py'
Nov 26 11:47:13 compute-0 sudo[161152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:13 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:13 compute-0 python3.9[161154]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:47:13 compute-0 sudo[161152]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:14 compute-0 sudo[161305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pczzhnywfcieeyrmxbqlxqznvheuvfvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157633.8312201-64-130433193057173/AnsiballZ_systemd_service.py'
Nov 26 11:47:14 compute-0 sudo[161305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:14 compute-0 python3.9[161307]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:47:14 compute-0 sudo[161305]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:14 compute-0 sudo[161458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koilrttyzoqwnrhvoufqivqdzxicgkdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157634.3733125-64-119480543629280/AnsiballZ_systemd_service.py'
Nov 26 11:47:14 compute-0 sudo[161458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:14 compute-0 ceph-mon[74928]: pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:14 compute-0 python3.9[161460]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:47:14 compute-0 sudo[161458]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:15 compute-0 sudo[161611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhczwluemyiubfzcmfuvtuzvwrhpsaew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157634.9058876-64-115820675169731/AnsiballZ_systemd_service.py'
Nov 26 11:47:15 compute-0 sudo[161611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:15 compute-0 python3.9[161613]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:47:15 compute-0 sudo[161611]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:15 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:15 compute-0 sudo[161764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojzcnyhvjkuzduidtheolcsuqdjksdre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157635.475131-64-227141575420400/AnsiballZ_systemd_service.py'
Nov 26 11:47:15 compute-0 sudo[161764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:15 compute-0 python3.9[161766]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:47:15 compute-0 sudo[161764]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:16 compute-0 sudo[161917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azlemlhvppniheymipjxprlqpjpxpbnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157636.0165703-64-226952910919105/AnsiballZ_systemd_service.py'
Nov 26 11:47:16 compute-0 sudo[161917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:16 compute-0 python3.9[161919]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:47:16 compute-0 sudo[161917]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:47:16 compute-0 ceph-mon[74928]: pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:17 compute-0 sudo[162070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhwlrouxpfuuxqzjkukyhcyonxgcwwyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157636.8253028-116-197617855315993/AnsiballZ_file.py'
Nov 26 11:47:17 compute-0 sudo[162070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:17 compute-0 python3.9[162072]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:47:17 compute-0 sudo[162070]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:17 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:17 compute-0 sudo[162222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjfprlrsewbanswpcgobnwontroakoic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157637.3808012-116-14069615556369/AnsiballZ_file.py'
Nov 26 11:47:17 compute-0 sudo[162222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:17 compute-0 python3.9[162224]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:47:17 compute-0 sudo[162222]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:17 compute-0 sudo[162374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejolrfpxjpexaikdzqjvbkacumfnaegh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157637.818315-116-272817880908783/AnsiballZ_file.py'
Nov 26 11:47:17 compute-0 sudo[162374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:18 compute-0 python3.9[162376]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:47:18 compute-0 sudo[162374]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:18 compute-0 sudo[162526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtqjcpnwsnpeudhhkhmeyaklttbtdzcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157638.2350452-116-86805955065134/AnsiballZ_file.py'
Nov 26 11:47:18 compute-0 sudo[162526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:18 compute-0 python3.9[162528]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:47:18 compute-0 sudo[162526]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:18 compute-0 ceph-mon[74928]: pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:18 compute-0 sudo[162678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhshrctrsovoijnjrcqlvahlprkwdpfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157638.6463368-116-245379028955586/AnsiballZ_file.py'
Nov 26 11:47:18 compute-0 sudo[162678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:18 compute-0 python3.9[162680]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:47:18 compute-0 sudo[162678]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:19 compute-0 sudo[162830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-canqxlgdigjvkzvqmzzgrztkssrcuzsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157639.0675821-116-66935811903413/AnsiballZ_file.py'
Nov 26 11:47:19 compute-0 sudo[162830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:19 compute-0 python3.9[162832]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:47:19 compute-0 sudo[162830]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:19 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:19 compute-0 sudo[162982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmtrxzzrvulleqemkxsqeuipenxvagtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157639.5066473-116-67052636459530/AnsiballZ_file.py'
Nov 26 11:47:19 compute-0 sudo[162982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:19 compute-0 python3.9[162984]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:47:19 compute-0 sudo[162982]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:20 compute-0 sudo[163134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eitdwbwhbgcfdujczarrmkeqznlofjnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157639.9537969-166-83447663856380/AnsiballZ_file.py'
Nov 26 11:47:20 compute-0 sudo[163134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:20 compute-0 python3.9[163136]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:47:20 compute-0 sudo[163134]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:20 compute-0 sudo[163286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxitpwkpqeonjfarmfnsazkqpvmvtaes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157640.380981-166-225971575123043/AnsiballZ_file.py'
Nov 26 11:47:20 compute-0 sudo[163286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:20 compute-0 python3.9[163288]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:47:20 compute-0 ceph-mon[74928]: pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:20 compute-0 sudo[163286]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:20 compute-0 sudo[163438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbhhiuexsfbadkdzugaivzpvyinyhzim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157640.7953627-166-143038239373787/AnsiballZ_file.py'
Nov 26 11:47:20 compute-0 sudo[163438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:21 compute-0 python3.9[163440]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:47:21 compute-0 sudo[163438]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:21 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:21 compute-0 sudo[163590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qftfizfsdboulibfxysejwtjibwjntgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157641.343323-166-250986091626055/AnsiballZ_file.py'
Nov 26 11:47:21 compute-0 sudo[163590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:47:21 compute-0 python3.9[163592]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:47:21 compute-0 sudo[163590]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:21 compute-0 sudo[163742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjmglwmbsmvrarvugpjsgzxaqibeaxpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157641.7686863-166-160737424681868/AnsiballZ_file.py'
Nov 26 11:47:21 compute-0 sudo[163742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:22 compute-0 python3.9[163744]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:47:22 compute-0 sudo[163742]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:22 compute-0 sudo[163894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyftrjtzwcxiuhwwjodwvoarbidwbzji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157642.1967943-166-51308957223518/AnsiballZ_file.py'
Nov 26 11:47:22 compute-0 sudo[163894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:22 compute-0 python3.9[163896]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:47:22 compute-0 sudo[163894]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:22 compute-0 ceph-mon[74928]: pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:22 compute-0 sudo[164046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfqoleyqvoxwqvxoumitghvvwstgtzlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157642.6118312-166-196664603869468/AnsiballZ_file.py'
Nov 26 11:47:22 compute-0 sudo[164046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:22 compute-0 python3.9[164048]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:47:22 compute-0 sudo[164046]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:23 compute-0 sudo[164198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryihdwpwjnbcjgkybrdopoebxpvfjowj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157643.1269295-217-23787868100534/AnsiballZ_command.py'
Nov 26 11:47:23 compute-0 sudo[164198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:23 compute-0 python3.9[164200]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:47:23 compute-0 sudo[164198]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:23 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:24 compute-0 python3.9[164352]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 11:47:24 compute-0 sudo[164502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjafvhdzeurjpaeqtnllcejdnnvanoby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157644.2300847-235-9262105211638/AnsiballZ_systemd_service.py'
Nov 26 11:47:24 compute-0 sudo[164502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:24 compute-0 python3.9[164504]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 11:47:24 compute-0 systemd[1]: Reloading.
Nov 26 11:47:24 compute-0 ceph-mon[74928]: pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:24 compute-0 systemd-rc-local-generator[164527]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:47:24 compute-0 systemd-sysv-generator[164531]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:47:24 compute-0 sudo[164502]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:25 compute-0 sudo[164689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnqcxkyozmzhwwaeaugaqdppjfleabiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157645.0118663-243-54005800628969/AnsiballZ_command.py'
Nov 26 11:47:25 compute-0 sudo[164689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:25 compute-0 python3.9[164691]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:47:25 compute-0 sudo[164689]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:25 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:25 compute-0 sudo[164842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikgpepxyavqqxlqvnhjtbpahgfxdcxkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157645.461659-243-236753898301558/AnsiballZ_command.py'
Nov 26 11:47:25 compute-0 sudo[164842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:25 compute-0 python3.9[164844]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:47:25 compute-0 sudo[164842]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:26 compute-0 sudo[164995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcxrchtajygtqxwljsgzwucoawlzvwpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157645.903656-243-95531036847738/AnsiballZ_command.py'
Nov 26 11:47:26 compute-0 sudo[164995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:26 compute-0 python3.9[164997]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:47:26 compute-0 sudo[164995]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:26 compute-0 sudo[165148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwvfhjvvnwlzymadaaefkzkkgwzbinok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157646.3293054-243-268223132558265/AnsiballZ_command.py'
Nov 26 11:47:26 compute-0 sudo[165148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:47:26 compute-0 python3.9[165150]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:47:26 compute-0 sudo[165148]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:26 compute-0 ceph-mon[74928]: pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:26 compute-0 sudo[165301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkpikanbieedcgwtkhqiesofebnabcyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157646.7551708-243-83251457280271/AnsiballZ_command.py'
Nov 26 11:47:26 compute-0 sudo[165301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:27 compute-0 python3.9[165303]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:47:27 compute-0 sudo[165301]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:27 compute-0 sudo[165454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnfomhnvjxnbbsrdmsxeftoclhgyjagf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157647.1905456-243-194426803830746/AnsiballZ_command.py'
Nov 26 11:47:27 compute-0 sudo[165454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:27 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:27 compute-0 python3.9[165456]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:47:27 compute-0 sudo[165454]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:27 compute-0 sudo[165607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbztniwhottunyzjidnhtyqmmhhqvkch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157647.613973-243-53409709008986/AnsiballZ_command.py'
Nov 26 11:47:27 compute-0 sudo[165607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:27 compute-0 python3.9[165609]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:47:27 compute-0 sudo[165607]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:28 compute-0 sudo[165760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzupkdqxpqvvpvoiubcfwnnzkmokcixj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157648.2459407-297-89418413589186/AnsiballZ_getent.py'
Nov 26 11:47:28 compute-0 sudo[165760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:28 compute-0 ceph-mon[74928]: pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:28 compute-0 python3.9[165762]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 26 11:47:28 compute-0 sudo[165760]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:29 compute-0 sudo[165913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrbbmcovahphcdpobsjxzdgmadvttgok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157648.8614702-305-173773027485237/AnsiballZ_group.py'
Nov 26 11:47:29 compute-0 sudo[165913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:29 compute-0 python3.9[165915]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 11:47:29 compute-0 groupadd[165916]: group added to /etc/group: name=libvirt, GID=42473
Nov 26 11:47:29 compute-0 groupadd[165916]: group added to /etc/gshadow: name=libvirt
Nov 26 11:47:29 compute-0 groupadd[165916]: new group: name=libvirt, GID=42473
Nov 26 11:47:29 compute-0 sudo[165913]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:29 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:29 compute-0 sudo[166071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drbkqwxhsryvccwmlrgafzggllvtbjpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157649.4998155-313-163890846472899/AnsiballZ_user.py'
Nov 26 11:47:29 compute-0 sudo[166071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:30 compute-0 python3.9[166073]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 26 11:47:30 compute-0 useradd[166075]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Nov 26 11:47:30 compute-0 rsyslogd[960]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 11:47:30 compute-0 rsyslogd[960]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 11:47:30 compute-0 sudo[166071]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:30 compute-0 sudo[166232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rylbxfapiekdjldpjzrjtnepavitmffo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157650.3220754-324-205678759217972/AnsiballZ_setup.py'
Nov 26 11:47:30 compute-0 sudo[166232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:30 compute-0 ceph-mon[74928]: pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:30 compute-0 python3.9[166234]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 11:47:31 compute-0 sudo[166232]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:31 compute-0 sudo[166316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdhpqgseqzbmpndgjlwdcqnsryzysutp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157650.3220754-324-205678759217972/AnsiballZ_dnf.py'
Nov 26 11:47:31 compute-0 sudo[166316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:47:31 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:47:31.555461) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157651555496, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1453, "num_deletes": 250, "total_data_size": 2296041, "memory_usage": 2326936, "flush_reason": "Manual Compaction"}
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157651559594, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1317469, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7383, "largest_seqno": 8835, "table_properties": {"data_size": 1312596, "index_size": 2205, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12559, "raw_average_key_size": 19, "raw_value_size": 1301721, "raw_average_value_size": 2049, "num_data_blocks": 105, "num_entries": 635, "num_filter_entries": 635, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764157495, "oldest_key_time": 1764157495, "file_creation_time": 1764157651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "363c2a1d-8d28-40b7-a8ff-7233f1c9b7d5", "db_session_id": "CJT49RLFB1C6KNYXG0ER", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 4173 microseconds, and 3067 cpu microseconds.
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:47:31.559615) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1317469 bytes OK
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:47:31.559649) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:47:31.560107) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:47:31.560117) EVENT_LOG_v1 {"time_micros": 1764157651560114, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:47:31.560126) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2289652, prev total WAL file size 2289652, number of live WAL files 2.
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:47:31.560675) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1286KB)], [20(7431KB)]
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157651560692, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8927676, "oldest_snapshot_seqno": -1}
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3320 keys, 6870770 bytes, temperature: kUnknown
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157651575721, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6870770, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6845532, "index_size": 15849, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 79639, "raw_average_key_size": 23, "raw_value_size": 6782479, "raw_average_value_size": 2042, "num_data_blocks": 704, "num_entries": 3320, "num_filter_entries": 3320, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764157079, "oldest_key_time": 0, "file_creation_time": 1764157651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "363c2a1d-8d28-40b7-a8ff-7233f1c9b7d5", "db_session_id": "CJT49RLFB1C6KNYXG0ER", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:47:31.575922) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6870770 bytes
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:47:31.576270) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 589.0 rd, 453.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.3 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(12.0) write-amplify(5.2) OK, records in: 3760, records dropped: 440 output_compression: NoCompression
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:47:31.576283) EVENT_LOG_v1 {"time_micros": 1764157651576277, "job": 6, "event": "compaction_finished", "compaction_time_micros": 15157, "compaction_time_cpu_micros": 11514, "output_level": 6, "num_output_files": 1, "total_output_size": 6870770, "num_input_records": 3760, "num_output_records": 3320, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157651576757, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157651577630, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:47:31.560587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:47:31.577735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:47:31.577738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:47:31.577740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:47:31.577740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:47:31 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:47:31.577741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:47:31 compute-0 python3.9[166318]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 11:47:31 compute-0 podman[166319]: 2025-11-26 11:47:31.620142502 +0000 UTC m=+0.040630342 container health_status 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Nov 26 11:47:32 compute-0 ceph-mon[74928]: pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:33 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:34 compute-0 ceph-mon[74928]: pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:35 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:47:36 compute-0 ceph-mon[74928]: pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:37 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:38 compute-0 ceph-mon[74928]: pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:39 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:40 compute-0 ceph-mon[74928]: pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Optimize plan auto_2025-11-26_11:47:41
Nov 26 11:47:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 11:47:41 compute-0 ceph-mgr[75197]: [balancer INFO root] do_upmap
Nov 26 11:47:41 compute-0 ceph-mgr[75197]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'vms', 'backups', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'images']
Nov 26 11:47:41 compute-0 ceph-mgr[75197]: [balancer INFO root] prepared 0/10 changes
Nov 26 11:47:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:47:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:47:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:47:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:47:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:47:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:47:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 11:47:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 11:47:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:47:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:47:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:47:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:47:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:47:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:47:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:47:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:47:41 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:47:42 compute-0 podman[166522]: 2025-11-26 11:47:42.634180203 +0000 UTC m=+0.057831006 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 26 11:47:42 compute-0 ceph-mon[74928]: pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:43 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:44 compute-0 ceph-mon[74928]: pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:45 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:47:46 compute-0 sudo[166552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:47:46 compute-0 sudo[166552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:47:46 compute-0 sudo[166552]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:46 compute-0 sudo[166577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:47:46 compute-0 sudo[166577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:47:46 compute-0 sudo[166577]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:46 compute-0 sudo[166602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:47:46 compute-0 sudo[166602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:47:46 compute-0 ceph-mon[74928]: pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:46 compute-0 sudo[166602]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:46 compute-0 sudo[166627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 11:47:46 compute-0 sudo[166627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:47:47 compute-0 sudo[166627]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:47:47 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:47:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:47:47 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:47:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:47:47 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:47:47 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 7779d53a-bbdf-4b48-87fc-673c6f5b5cf1 does not exist
Nov 26 11:47:47 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 2c3de274-ba76-49c0-b507-a225009c970b does not exist
Nov 26 11:47:47 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 0c2c9858-2a23-4ada-aa8c-d8abba1fe3e6 does not exist
Nov 26 11:47:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 11:47:47 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:47:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 11:47:47 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:47:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:47:47 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:47:47 compute-0 sudo[166680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:47:47 compute-0 sudo[166680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:47:47 compute-0 sudo[166680]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:47 compute-0 sudo[166705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:47:47 compute-0 sudo[166705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:47:47 compute-0 sudo[166705]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:47 compute-0 sudo[166730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:47:47 compute-0 sudo[166730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:47:47 compute-0 sudo[166730]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:47 compute-0 sudo[166755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 11:47:47 compute-0 sudo[166755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:47:47 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:47 compute-0 podman[166812]: 2025-11-26 11:47:47.538333284 +0000 UTC m=+0.026866592 container create e19c1fce0d244087416dd95bec63efcb54d66ab5a7c8869c0875afccebcd5f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 11:47:47 compute-0 systemd[1]: Started libpod-conmon-e19c1fce0d244087416dd95bec63efcb54d66ab5a7c8869c0875afccebcd5f97.scope.
Nov 26 11:47:47 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:47:47 compute-0 podman[166812]: 2025-11-26 11:47:47.587089219 +0000 UTC m=+0.075622537 container init e19c1fce0d244087416dd95bec63efcb54d66ab5a7c8869c0875afccebcd5f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 11:47:47 compute-0 podman[166812]: 2025-11-26 11:47:47.591881335 +0000 UTC m=+0.080414643 container start e19c1fce0d244087416dd95bec63efcb54d66ab5a7c8869c0875afccebcd5f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 11:47:47 compute-0 podman[166812]: 2025-11-26 11:47:47.593145819 +0000 UTC m=+0.081679137 container attach e19c1fce0d244087416dd95bec63efcb54d66ab5a7c8869c0875afccebcd5f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_yalow, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 11:47:47 compute-0 epic_yalow[166825]: 167 167
Nov 26 11:47:47 compute-0 systemd[1]: libpod-e19c1fce0d244087416dd95bec63efcb54d66ab5a7c8869c0875afccebcd5f97.scope: Deactivated successfully.
Nov 26 11:47:47 compute-0 podman[166812]: 2025-11-26 11:47:47.600629501 +0000 UTC m=+0.089162809 container died e19c1fce0d244087416dd95bec63efcb54d66ab5a7c8869c0875afccebcd5f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:47:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-63b296d8dbfaa897d8a140cc27935712ddbbc31f1514bbbbb9c32b8d1f9c9cc7-merged.mount: Deactivated successfully.
Nov 26 11:47:47 compute-0 podman[166812]: 2025-11-26 11:47:47.62002337 +0000 UTC m=+0.108556678 container remove e19c1fce0d244087416dd95bec63efcb54d66ab5a7c8869c0875afccebcd5f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_yalow, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 11:47:47 compute-0 podman[166812]: 2025-11-26 11:47:47.526742358 +0000 UTC m=+0.015275685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:47:47 compute-0 systemd[1]: libpod-conmon-e19c1fce0d244087416dd95bec63efcb54d66ab5a7c8869c0875afccebcd5f97.scope: Deactivated successfully.
Nov 26 11:47:47 compute-0 podman[166847]: 2025-11-26 11:47:47.750149839 +0000 UTC m=+0.035042658 container create bf7bab9ba2225287b152f407848ea5c1773caa850b93e311e3dc3b0874a081f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_volhard, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 11:47:47 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:47:47 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:47:47 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:47:47 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:47:47 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:47:47 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:47:47 compute-0 systemd[1]: Started libpod-conmon-bf7bab9ba2225287b152f407848ea5c1773caa850b93e311e3dc3b0874a081f8.scope.
Nov 26 11:47:47 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:47:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c61c1bd9d016e1aa48f9a1f0fbe815b87e434dff1788b815f8ce125b4e2fb09b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:47:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c61c1bd9d016e1aa48f9a1f0fbe815b87e434dff1788b815f8ce125b4e2fb09b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:47:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c61c1bd9d016e1aa48f9a1f0fbe815b87e434dff1788b815f8ce125b4e2fb09b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:47:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c61c1bd9d016e1aa48f9a1f0fbe815b87e434dff1788b815f8ce125b4e2fb09b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:47:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c61c1bd9d016e1aa48f9a1f0fbe815b87e434dff1788b815f8ce125b4e2fb09b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:47:47 compute-0 podman[166847]: 2025-11-26 11:47:47.813486778 +0000 UTC m=+0.098379607 container init bf7bab9ba2225287b152f407848ea5c1773caa850b93e311e3dc3b0874a081f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_volhard, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:47:47 compute-0 podman[166847]: 2025-11-26 11:47:47.818259828 +0000 UTC m=+0.103152657 container start bf7bab9ba2225287b152f407848ea5c1773caa850b93e311e3dc3b0874a081f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 11:47:47 compute-0 podman[166847]: 2025-11-26 11:47:47.819357718 +0000 UTC m=+0.104250548 container attach bf7bab9ba2225287b152f407848ea5c1773caa850b93e311e3dc3b0874a081f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_volhard, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 11:47:47 compute-0 podman[166847]: 2025-11-26 11:47:47.738848719 +0000 UTC m=+0.023741579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:47:48 compute-0 serene_volhard[166860]: --> passed data devices: 0 physical, 3 LVM
Nov 26 11:47:48 compute-0 serene_volhard[166860]: --> relative data size: 1.0
Nov 26 11:47:48 compute-0 serene_volhard[166860]: --> All data devices are unavailable
Nov 26 11:47:48 compute-0 systemd[1]: libpod-bf7bab9ba2225287b152f407848ea5c1773caa850b93e311e3dc3b0874a081f8.scope: Deactivated successfully.
Nov 26 11:47:48 compute-0 podman[166847]: 2025-11-26 11:47:48.650625391 +0000 UTC m=+0.935518230 container died bf7bab9ba2225287b152f407848ea5c1773caa850b93e311e3dc3b0874a081f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 11:47:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c61c1bd9d016e1aa48f9a1f0fbe815b87e434dff1788b815f8ce125b4e2fb09b-merged.mount: Deactivated successfully.
Nov 26 11:47:48 compute-0 podman[166847]: 2025-11-26 11:47:48.684418171 +0000 UTC m=+0.969311000 container remove bf7bab9ba2225287b152f407848ea5c1773caa850b93e311e3dc3b0874a081f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_volhard, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:47:48 compute-0 systemd[1]: libpod-conmon-bf7bab9ba2225287b152f407848ea5c1773caa850b93e311e3dc3b0874a081f8.scope: Deactivated successfully.
Nov 26 11:47:48 compute-0 sudo[166755]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:48 compute-0 sudo[166899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:47:48 compute-0 sudo[166899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:47:48 compute-0 sudo[166899]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:48 compute-0 ceph-mon[74928]: pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:48 compute-0 sudo[166924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:47:48 compute-0 sudo[166924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:47:48 compute-0 sudo[166924]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:48 compute-0 sudo[166949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:47:48 compute-0 sudo[166949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:47:48 compute-0 sudo[166949]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:48 compute-0 sudo[166974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- lvm list --format json
Nov 26 11:47:48 compute-0 sudo[166974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:47:49 compute-0 podman[167030]: 2025-11-26 11:47:49.109026651 +0000 UTC m=+0.028100478 container create 5ae1db742b61d49d6c0aded958a13575f0bc60d2b7a99f796a10e688b0d9254c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 11:47:49 compute-0 systemd[1]: Started libpod-conmon-5ae1db742b61d49d6c0aded958a13575f0bc60d2b7a99f796a10e688b0d9254c.scope.
Nov 26 11:47:49 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:47:49 compute-0 podman[167030]: 2025-11-26 11:47:49.158922404 +0000 UTC m=+0.077996242 container init 5ae1db742b61d49d6c0aded958a13575f0bc60d2b7a99f796a10e688b0d9254c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bhabha, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 11:47:49 compute-0 podman[167030]: 2025-11-26 11:47:49.165014893 +0000 UTC m=+0.084088720 container start 5ae1db742b61d49d6c0aded958a13575f0bc60d2b7a99f796a10e688b0d9254c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bhabha, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:47:49 compute-0 podman[167030]: 2025-11-26 11:47:49.166528166 +0000 UTC m=+0.085601983 container attach 5ae1db742b61d49d6c0aded958a13575f0bc60d2b7a99f796a10e688b0d9254c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bhabha, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 11:47:49 compute-0 mystifying_bhabha[167043]: 167 167
Nov 26 11:47:49 compute-0 systemd[1]: libpod-5ae1db742b61d49d6c0aded958a13575f0bc60d2b7a99f796a10e688b0d9254c.scope: Deactivated successfully.
Nov 26 11:47:49 compute-0 conmon[167043]: conmon 5ae1db742b61d49d6c0a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5ae1db742b61d49d6c0aded958a13575f0bc60d2b7a99f796a10e688b0d9254c.scope/container/memory.events
Nov 26 11:47:49 compute-0 podman[167030]: 2025-11-26 11:47:49.168996209 +0000 UTC m=+0.088070047 container died 5ae1db742b61d49d6c0aded958a13575f0bc60d2b7a99f796a10e688b0d9254c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bhabha, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 26 11:47:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-e473a212421571e45062f6d9e931743e385d1f60b55a3104dccefa5734436f6e-merged.mount: Deactivated successfully.
Nov 26 11:47:49 compute-0 podman[167030]: 2025-11-26 11:47:49.187508015 +0000 UTC m=+0.106581843 container remove 5ae1db742b61d49d6c0aded958a13575f0bc60d2b7a99f796a10e688b0d9254c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:47:49 compute-0 podman[167030]: 2025-11-26 11:47:49.097455031 +0000 UTC m=+0.016528878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:47:49 compute-0 systemd[1]: libpod-conmon-5ae1db742b61d49d6c0aded958a13575f0bc60d2b7a99f796a10e688b0d9254c.scope: Deactivated successfully.
Nov 26 11:47:49 compute-0 podman[167065]: 2025-11-26 11:47:49.304795164 +0000 UTC m=+0.027605665 container create e1896aa1046a5a4b1661ddbe75e14ef83aab28103ad5067361fbfd04e5afb268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:47:49 compute-0 systemd[1]: Started libpod-conmon-e1896aa1046a5a4b1661ddbe75e14ef83aab28103ad5067361fbfd04e5afb268.scope.
Nov 26 11:47:49 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:47:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf4eacd3df79a092f8da0a18aae2e66eb281d7b91a9c3d1a20134bcfa4db3f96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:47:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf4eacd3df79a092f8da0a18aae2e66eb281d7b91a9c3d1a20134bcfa4db3f96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:47:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf4eacd3df79a092f8da0a18aae2e66eb281d7b91a9c3d1a20134bcfa4db3f96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:47:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf4eacd3df79a092f8da0a18aae2e66eb281d7b91a9c3d1a20134bcfa4db3f96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:47:49 compute-0 podman[167065]: 2025-11-26 11:47:49.364474798 +0000 UTC m=+0.087285308 container init e1896aa1046a5a4b1661ddbe75e14ef83aab28103ad5067361fbfd04e5afb268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:47:49 compute-0 podman[167065]: 2025-11-26 11:47:49.369500925 +0000 UTC m=+0.092311426 container start e1896aa1046a5a4b1661ddbe75e14ef83aab28103ad5067361fbfd04e5afb268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_diffie, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:47:49 compute-0 podman[167065]: 2025-11-26 11:47:49.370689766 +0000 UTC m=+0.093500267 container attach e1896aa1046a5a4b1661ddbe75e14ef83aab28103ad5067361fbfd04e5afb268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_diffie, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 11:47:49 compute-0 podman[167065]: 2025-11-26 11:47:49.293577643 +0000 UTC m=+0.016388163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:47:49 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]: {
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:     "0": [
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:         {
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "devices": [
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "/dev/loop3"
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             ],
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "lv_name": "ceph_lv0",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "lv_size": "21470642176",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a9ad59a0-aa2e-4d92-b571-519d2d145b6a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "lv_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "name": "ceph_lv0",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "tags": {
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.block_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.cluster_name": "ceph",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.crush_device_class": "",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.encrypted": "0",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.osd_fsid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.osd_id": "0",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.type": "block",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.vdo": "0"
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             },
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "type": "block",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "vg_name": "ceph_vg0"
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:         }
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:     ],
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:     "1": [
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:         {
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "devices": [
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "/dev/loop4"
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             ],
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "lv_name": "ceph_lv1",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "lv_size": "21470642176",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2627095b-eef8-4027-bfef-68bf7cb6801f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "lv_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "name": "ceph_lv1",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "tags": {
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.block_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.cluster_name": "ceph",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.crush_device_class": "",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.encrypted": "0",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.osd_fsid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.osd_id": "1",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.type": "block",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.vdo": "0"
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             },
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "type": "block",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "vg_name": "ceph_vg1"
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:         }
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:     ],
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:     "2": [
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:         {
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "devices": [
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "/dev/loop5"
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             ],
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "lv_name": "ceph_lv2",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "lv_size": "21470642176",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d56156fb-7361-4bef-b06b-1320109b4323,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "lv_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "name": "ceph_lv2",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "tags": {
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.block_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.cluster_name": "ceph",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.crush_device_class": "",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.encrypted": "0",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.osd_fsid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.osd_id": "2",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.type": "block",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:                 "ceph.vdo": "0"
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             },
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "type": "block",
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:             "vg_name": "ceph_vg2"
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:         }
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]:     ]
Nov 26 11:47:49 compute-0 eloquent_diffie[167078]: }
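[editor's note] The JSON block printed above inside the eloquent_diffie container has the shape of ceph-volume "lvm list --format json" output: OSD id strings ("0", "1", "2") mapping to lists of logical-volume records whose "tags" carry the ceph.* metadata (cluster fsid, osd_fsid, block device). A minimal sketch of how that structure could be consumed once captured to a string; summarize_lvm_list is a hypothetical name for illustration, not a ceph-volume or cephadm API:

    import json

    def summarize_lvm_list(raw_json):
        # Hypothetical helper (not part of ceph-volume/cephadm): walk the
        # "lvm list --format json" style structure shown above, which maps
        # OSD id strings to lists of LV records, and pull the osd id,
        # osd fsid and block device path out of the ceph.* tags.
        data = json.loads(raw_json)
        rows = []
        for osd_id, lvs in data.items():
            for lv in lvs:
                tags = lv.get("tags", {})
                rows.append((
                    int(osd_id),
                    tags.get("ceph.osd_fsid", ""),
                    tags.get("ceph.block_device", lv.get("path", "")),
                ))
        return sorted(rows)

    # Shape taken from the log above (one OSD shown, values from OSD 0):
    sample = json.dumps({
        "0": [{
            "path": "/dev/ceph_vg0/ceph_lv0",
            "tags": {
                "ceph.osd_fsid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
            },
        }],
    })
    print(summarize_lvm_list(sample))
    # [(0, 'a9ad59a0-aa2e-4d92-b571-519d2d145b6a', '/dev/ceph_vg0/ceph_lv0')]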
Nov 26 11:47:49 compute-0 systemd[1]: libpod-e1896aa1046a5a4b1661ddbe75e14ef83aab28103ad5067361fbfd04e5afb268.scope: Deactivated successfully.
Nov 26 11:47:49 compute-0 podman[167065]: 2025-11-26 11:47:49.99222822 +0000 UTC m=+0.715038720 container died e1896aa1046a5a4b1661ddbe75e14ef83aab28103ad5067361fbfd04e5afb268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:47:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf4eacd3df79a092f8da0a18aae2e66eb281d7b91a9c3d1a20134bcfa4db3f96-merged.mount: Deactivated successfully.
Nov 26 11:47:50 compute-0 podman[167065]: 2025-11-26 11:47:50.024849632 +0000 UTC m=+0.747660132 container remove e1896aa1046a5a4b1661ddbe75e14ef83aab28103ad5067361fbfd04e5afb268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_diffie, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 11:47:50 compute-0 systemd[1]: libpod-conmon-e1896aa1046a5a4b1661ddbe75e14ef83aab28103ad5067361fbfd04e5afb268.scope: Deactivated successfully.
Nov 26 11:47:50 compute-0 sudo[166974]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:50 compute-0 sudo[167097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:47:50 compute-0 sudo[167097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:47:50 compute-0 sudo[167097]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:50 compute-0 sudo[167122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:47:50 compute-0 sudo[167122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:47:50 compute-0 sudo[167122]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:50 compute-0 sudo[167147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:47:50 compute-0 sudo[167147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:47:50 compute-0 sudo[167147]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:50 compute-0 sudo[167172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- raw list --format json
Nov 26 11:47:50 compute-0 sudo[167172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:47:50 compute-0 podman[167228]: 2025-11-26 11:47:50.434103798 +0000 UTC m=+0.024980845 container create 6283c7dfd32202f785fbd10031e889378d3c9cae43c0b94232f562c1a6121366 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_snyder, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 11:47:50 compute-0 systemd[1]: Started libpod-conmon-6283c7dfd32202f785fbd10031e889378d3c9cae43c0b94232f562c1a6121366.scope.
Nov 26 11:47:50 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:47:50 compute-0 podman[167228]: 2025-11-26 11:47:50.487923602 +0000 UTC m=+0.078800668 container init 6283c7dfd32202f785fbd10031e889378d3c9cae43c0b94232f562c1a6121366 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:47:50 compute-0 podman[167228]: 2025-11-26 11:47:50.492392128 +0000 UTC m=+0.083269175 container start 6283c7dfd32202f785fbd10031e889378d3c9cae43c0b94232f562c1a6121366 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_snyder, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 11:47:50 compute-0 podman[167228]: 2025-11-26 11:47:50.493590518 +0000 UTC m=+0.084467564 container attach 6283c7dfd32202f785fbd10031e889378d3c9cae43c0b94232f562c1a6121366 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_snyder, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 11:47:50 compute-0 sweet_snyder[167242]: 167 167
Nov 26 11:47:50 compute-0 systemd[1]: libpod-6283c7dfd32202f785fbd10031e889378d3c9cae43c0b94232f562c1a6121366.scope: Deactivated successfully.
Nov 26 11:47:50 compute-0 conmon[167242]: conmon 6283c7dfd32202f785fb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6283c7dfd32202f785fbd10031e889378d3c9cae43c0b94232f562c1a6121366.scope/container/memory.events
Nov 26 11:47:50 compute-0 podman[167228]: 2025-11-26 11:47:50.495544151 +0000 UTC m=+0.086421199 container died 6283c7dfd32202f785fbd10031e889378d3c9cae43c0b94232f562c1a6121366 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_snyder, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:47:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b0fe0c3ed11e2b6c520c90778bea1bbec99e65692520020854db5daea4f7bf2-merged.mount: Deactivated successfully.
Nov 26 11:47:50 compute-0 podman[167228]: 2025-11-26 11:47:50.514995309 +0000 UTC m=+0.105872356 container remove 6283c7dfd32202f785fbd10031e889378d3c9cae43c0b94232f562c1a6121366 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:47:50 compute-0 podman[167228]: 2025-11-26 11:47:50.423642833 +0000 UTC m=+0.014519900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:47:50 compute-0 systemd[1]: libpod-conmon-6283c7dfd32202f785fbd10031e889378d3c9cae43c0b94232f562c1a6121366.scope: Deactivated successfully.
Nov 26 11:47:50 compute-0 podman[167263]: 2025-11-26 11:47:50.635347467 +0000 UTC m=+0.027497411 container create 95e6ec360da99af4b27549f1a4e5bace9636d0b2a2d631fca7a8555db664918a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hellman, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:47:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 11:47:50 compute-0 systemd[1]: Started libpod-conmon-95e6ec360da99af4b27549f1a4e5bace9636d0b2a2d631fca7a8555db664918a.scope.
Nov 26 11:47:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:47:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 11:47:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:47:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:47:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:47:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:47:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:47:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:47:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:47:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:47:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:47:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 11:47:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:47:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:47:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:47:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 11:47:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:47:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 11:47:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:47:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:47:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:47:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 11:47:50 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb985701b626f56b3f981e66734a6ab2827444802e15bcd6e9f8711ffa6946ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb985701b626f56b3f981e66734a6ab2827444802e15bcd6e9f8711ffa6946ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb985701b626f56b3f981e66734a6ab2827444802e15bcd6e9f8711ffa6946ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb985701b626f56b3f981e66734a6ab2827444802e15bcd6e9f8711ffa6946ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:47:50 compute-0 podman[167263]: 2025-11-26 11:47:50.686891559 +0000 UTC m=+0.079041503 container init 95e6ec360da99af4b27549f1a4e5bace9636d0b2a2d631fca7a8555db664918a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hellman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 11:47:50 compute-0 podman[167263]: 2025-11-26 11:47:50.692018878 +0000 UTC m=+0.084168821 container start 95e6ec360da99af4b27549f1a4e5bace9636d0b2a2d631fca7a8555db664918a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hellman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:47:50 compute-0 podman[167263]: 2025-11-26 11:47:50.693103974 +0000 UTC m=+0.085253917 container attach 95e6ec360da99af4b27549f1a4e5bace9636d0b2a2d631fca7a8555db664918a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:47:50 compute-0 podman[167263]: 2025-11-26 11:47:50.623784273 +0000 UTC m=+0.015934226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:47:50 compute-0 ceph-mon[74928]: pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:51 compute-0 kernel: SELinux:  Converting 2768 SID table entries...
Nov 26 11:47:51 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 11:47:51 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 26 11:47:51 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 11:47:51 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 26 11:47:51 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 11:47:51 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 11:47:51 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 11:47:51 compute-0 modest_hellman[167276]: {
Nov 26 11:47:51 compute-0 modest_hellman[167276]:     "2627095b-eef8-4027-bfef-68bf7cb6801f": {
Nov 26 11:47:51 compute-0 modest_hellman[167276]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:47:51 compute-0 modest_hellman[167276]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 11:47:51 compute-0 modest_hellman[167276]:         "osd_id": 1,
Nov 26 11:47:51 compute-0 modest_hellman[167276]:         "osd_uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:47:51 compute-0 modest_hellman[167276]:         "type": "bluestore"
Nov 26 11:47:51 compute-0 modest_hellman[167276]:     },
Nov 26 11:47:51 compute-0 modest_hellman[167276]:     "a9ad59a0-aa2e-4d92-b571-519d2d145b6a": {
Nov 26 11:47:51 compute-0 modest_hellman[167276]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:47:51 compute-0 modest_hellman[167276]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 11:47:51 compute-0 modest_hellman[167276]:         "osd_id": 0,
Nov 26 11:47:51 compute-0 modest_hellman[167276]:         "osd_uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:47:51 compute-0 modest_hellman[167276]:         "type": "bluestore"
Nov 26 11:47:51 compute-0 modest_hellman[167276]:     },
Nov 26 11:47:51 compute-0 modest_hellman[167276]:     "d56156fb-7361-4bef-b06b-1320109b4323": {
Nov 26 11:47:51 compute-0 modest_hellman[167276]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:47:51 compute-0 modest_hellman[167276]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 11:47:51 compute-0 modest_hellman[167276]:         "osd_id": 2,
Nov 26 11:47:51 compute-0 modest_hellman[167276]:         "osd_uuid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:47:51 compute-0 modest_hellman[167276]:         "type": "bluestore"
Nov 26 11:47:51 compute-0 modest_hellman[167276]:     }
Nov 26 11:47:51 compute-0 modest_hellman[167276]: }
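[editor's note] The cephadm call logged at 11:47:50 runs "ceph-volume ... raw list --format json", and the JSON block above (printed by the modest_hellman container) is that output: a map keyed by OSD uuid giving the device-mapper path and bluestore type. A short sketch of joining it with the earlier lvm-list view on the osd fsid; join_lvm_and_raw is again a hypothetical helper, not an existing API:

    import json

    def join_lvm_and_raw(lvm_json, raw_json):
        # Hypothetical helper for illustration only: the "lvm list" output is
        # keyed by OSD id, while the "raw list" output above is keyed by OSD
        # uuid, so the two views can be joined on the ceph.osd_fsid tag.
        lvm = json.loads(lvm_json)
        raw = json.loads(raw_json)
        joined = {}
        for osd_id, lvs in lvm.items():
            for lv in lvs:
                fsid = lv.get("tags", {}).get("ceph.osd_fsid")
                entry = raw.get(fsid, {})
                joined[int(osd_id)] = {
                    "osd_uuid": fsid,
                    "lv_path": lv.get("path"),
                    "raw_device": entry.get("device"),  # e.g. /dev/mapper/ceph_vg0-ceph_lv0
                    "type": entry.get("type"),          # "bluestore" in this log
                }
        return joined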
Nov 26 11:47:51 compute-0 systemd[1]: libpod-95e6ec360da99af4b27549f1a4e5bace9636d0b2a2d631fca7a8555db664918a.scope: Deactivated successfully.
Nov 26 11:47:51 compute-0 dbus-broker-launch[733]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 26 11:47:51 compute-0 conmon[167276]: conmon 95e6ec360da99af4b275 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-95e6ec360da99af4b27549f1a4e5bace9636d0b2a2d631fca7a8555db664918a.scope/container/memory.events
Nov 26 11:47:51 compute-0 podman[167263]: 2025-11-26 11:47:51.451225701 +0000 UTC m=+0.843375644 container died 95e6ec360da99af4b27549f1a4e5bace9636d0b2a2d631fca7a8555db664918a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hellman, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:47:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb985701b626f56b3f981e66734a6ab2827444802e15bcd6e9f8711ffa6946ea-merged.mount: Deactivated successfully.
Nov 26 11:47:51 compute-0 podman[167263]: 2025-11-26 11:47:51.492448611 +0000 UTC m=+0.884598554 container remove 95e6ec360da99af4b27549f1a4e5bace9636d0b2a2d631fca7a8555db664918a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 26 11:47:51 compute-0 systemd[1]: libpod-conmon-95e6ec360da99af4b27549f1a4e5bace9636d0b2a2d631fca7a8555db664918a.scope: Deactivated successfully.
Nov 26 11:47:51 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:51 compute-0 sudo[167172]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:47:51 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:47:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:47:51 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:47:51 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 9e387618-54d3-47fd-bf4d-8164c5558151 does not exist
Nov 26 11:47:51 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev fdde0f21-d8e0-40bc-b984-ea84e997b874 does not exist
Nov 26 11:47:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:47:51 compute-0 sudo[167326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:47:51 compute-0 sudo[167326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:47:51 compute-0 sudo[167326]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:51 compute-0 sudo[167351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:47:51 compute-0 sudo[167351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:47:51 compute-0 sudo[167351]: pam_unix(sudo:session): session closed for user root
Nov 26 11:47:52 compute-0 ceph-mon[74928]: pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:52 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:47:52 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:47:53 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:54 compute-0 ceph-mon[74928]: pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:55 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:56 compute-0 ceph-mon[74928]: pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:47:57 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:58 compute-0 kernel: SELinux:  Converting 2768 SID table entries...
Nov 26 11:47:58 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 11:47:58 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 26 11:47:58 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 11:47:58 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 26 11:47:58 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 11:47:58 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 11:47:58 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 11:47:58 compute-0 ceph-mon[74928]: pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:47:59 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:00 compute-0 ceph-mon[74928]: pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:01 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 11:48:01 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2010 writes, 8949 keys, 2010 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2010 writes, 2010 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2010 writes, 8949 keys, 2010 commit groups, 1.0 writes per commit group, ingest: 11.68 MB, 0.02 MB/s
                                           Interval WAL: 2010 writes, 2010 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    478.1      0.02              0.01         3    0.006       0      0       0.0       0.0
                                             L6      1/0    6.55 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    539.5    471.1      0.03              0.02         2    0.015    7174    729       0.0       0.0
                                            Sum      1/0    6.55 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    335.0    473.8      0.05              0.04         5    0.009    7174    729       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    341.0    481.1      0.05              0.04         4    0.012    7174    729       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    539.5    471.1      0.03              0.02         2    0.015    7174    729       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    498.1      0.02              0.01         2    0.009       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     62.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.008, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.0 seconds
                                           Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557bd53f31f0#2 capacity: 308.00 MB usage: 574.02 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(34,486.81 KB,0.154352%) FilterBlock(6,28.30 KB,0.00897197%) IndexBlock(6,58.91 KB,0.0186772%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 26 11:48:01 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:48:02 compute-0 ceph-mon[74928]: pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:02 compute-0 dbus-broker-launch[733]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 26 11:48:02 compute-0 podman[167384]: 2025-11-26 11:48:02.623172385 +0000 UTC m=+0.041311539 container health_status 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 26 11:48:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:48:02.981 159928 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:48:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:48:02.981 159928 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:48:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:48:02.981 159928 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:48:03 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:04 compute-0 ceph-mon[74928]: pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:05 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:48:06 compute-0 ceph-mon[74928]: pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:07 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:08 compute-0 ceph-mon[74928]: pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:09 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:10 compute-0 ceph-mon[74928]: pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:48:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:48:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:48:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:48:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:48:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:48:11 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:48:12 compute-0 ceph-mon[74928]: pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:13 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:13 compute-0 podman[173293]: 2025-11-26 11:48:13.643228412 +0000 UTC m=+0.073133343 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 26 11:48:14 compute-0 ceph-mon[74928]: pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:15 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:48:16 compute-0 ceph-mon[74928]: pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:17 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:18 compute-0 ceph-mon[74928]: pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:19 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:20 compute-0 ceph-mon[74928]: pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:21 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:48:22 compute-0 ceph-mon[74928]: pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:23 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:24 compute-0 ceph-mon[74928]: pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:25 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:48:26 compute-0 ceph-mon[74928]: pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:27 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:28 compute-0 ceph-mon[74928]: pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:29 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:30 compute-0 ceph-mon[74928]: pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:31 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:48:32 compute-0 ceph-mon[74928]: pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:33 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:33 compute-0 podman[184225]: 2025-11-26 11:48:33.616323363 +0000 UTC m=+0.038490099 container health_status 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 26 11:48:34 compute-0 kernel: SELinux:  Converting 2769 SID table entries...
Nov 26 11:48:34 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 11:48:34 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 26 11:48:34 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 11:48:34 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 26 11:48:34 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 11:48:34 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 11:48:34 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 11:48:34 compute-0 groupadd[184250]: group added to /etc/group: name=dnsmasq, GID=991
Nov 26 11:48:34 compute-0 ceph-mon[74928]: pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:34 compute-0 groupadd[184250]: group added to /etc/gshadow: name=dnsmasq
Nov 26 11:48:34 compute-0 groupadd[184250]: new group: name=dnsmasq, GID=991
Nov 26 11:48:34 compute-0 useradd[184257]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Nov 26 11:48:34 compute-0 dbus-broker-launch[724]: Noticed file-system modification, trigger reload.
Nov 26 11:48:34 compute-0 dbus-broker-launch[733]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 26 11:48:34 compute-0 dbus-broker-launch[724]: Noticed file-system modification, trigger reload.
Nov 26 11:48:35 compute-0 groupadd[184270]: group added to /etc/group: name=clevis, GID=990
Nov 26 11:48:35 compute-0 groupadd[184270]: group added to /etc/gshadow: name=clevis
Nov 26 11:48:35 compute-0 groupadd[184270]: new group: name=clevis, GID=990
Nov 26 11:48:35 compute-0 useradd[184277]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Nov 26 11:48:35 compute-0 usermod[184287]: add 'clevis' to group 'tss'
Nov 26 11:48:35 compute-0 usermod[184287]: add 'clevis' to shadow group 'tss'
Nov 26 11:48:35 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:48:36 compute-0 ceph-mon[74928]: pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:36 compute-0 polkitd[43470]: Reloading rules
Nov 26 11:48:36 compute-0 polkitd[43470]: Collecting garbage unconditionally...
Nov 26 11:48:36 compute-0 polkitd[43470]: Loading rules from directory /etc/polkit-1/rules.d
Nov 26 11:48:36 compute-0 polkitd[43470]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 26 11:48:36 compute-0 polkitd[43470]: Finished loading, compiling and executing 3 rules
Nov 26 11:48:36 compute-0 polkitd[43470]: Reloading rules
Nov 26 11:48:36 compute-0 polkitd[43470]: Collecting garbage unconditionally...
Nov 26 11:48:36 compute-0 polkitd[43470]: Loading rules from directory /etc/polkit-1/rules.d
Nov 26 11:48:36 compute-0 polkitd[43470]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 26 11:48:36 compute-0 polkitd[43470]: Finished loading, compiling and executing 3 rules
Nov 26 11:48:37 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:37 compute-0 groupadd[184474]: group added to /etc/group: name=ceph, GID=167
Nov 26 11:48:37 compute-0 groupadd[184474]: group added to /etc/gshadow: name=ceph
Nov 26 11:48:37 compute-0 groupadd[184474]: new group: name=ceph, GID=167
Nov 26 11:48:37 compute-0 useradd[184480]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Nov 26 11:48:38 compute-0 ceph-mon[74928]: pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:39 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:39 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Nov 26 11:48:39 compute-0 sshd[961]: Received signal 15; terminating.
Nov 26 11:48:39 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Nov 26 11:48:39 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Nov 26 11:48:39 compute-0 systemd[1]: sshd.service: Consumed 1.390s CPU time, read 32.0K from disk, written 0B to disk.
Nov 26 11:48:39 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Nov 26 11:48:39 compute-0 systemd[1]: Stopping sshd-keygen.target...
Nov 26 11:48:39 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 26 11:48:39 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 26 11:48:39 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 26 11:48:39 compute-0 systemd[1]: Reached target sshd-keygen.target.
Nov 26 11:48:39 compute-0 systemd[1]: Starting OpenSSH server daemon...
Nov 26 11:48:39 compute-0 sshd[185105]: Server listening on 0.0.0.0 port 22.
Nov 26 11:48:39 compute-0 sshd[185105]: Server listening on :: port 22.
Nov 26 11:48:39 compute-0 systemd[1]: Started OpenSSH server daemon.
Nov 26 11:48:40 compute-0 ceph-mon[74928]: pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:40 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 11:48:40 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 26 11:48:41 compute-0 systemd[1]: Reloading.
Nov 26 11:48:41 compute-0 systemd-sysv-generator[185359]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:48:41 compute-0 systemd-rc-local-generator[185354]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:48:41 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 11:48:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Optimize plan auto_2025-11-26_11:48:41
Nov 26 11:48:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 11:48:41 compute-0 ceph-mgr[75197]: [balancer INFO root] do_upmap
Nov 26 11:48:41 compute-0 ceph-mgr[75197]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'volumes', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta']
Nov 26 11:48:41 compute-0 ceph-mgr[75197]: [balancer INFO root] prepared 0/10 changes
Nov 26 11:48:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:48:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:48:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:48:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:48:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:48:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:48:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 11:48:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:48:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 11:48:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:48:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:48:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:48:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:48:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:48:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:48:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:48:41 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:48:42 compute-0 ceph-mon[74928]: pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:42 compute-0 sudo[166316]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:43 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:43 compute-0 sudo[189690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruyfhtifwvtwpfhschxisnonwcsmiuml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157723.1283474-336-11817327586022/AnsiballZ_systemd.py'
Nov 26 11:48:43 compute-0 sudo[189690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:48:43 compute-0 python3.9[189709]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 11:48:43 compute-0 systemd[1]: Reloading.
Nov 26 11:48:44 compute-0 systemd-rc-local-generator[190196]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:48:44 compute-0 podman[190070]: 2025-11-26 11:48:44.013553919 +0000 UTC m=+0.104110485 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 11:48:44 compute-0 systemd-sysv-generator[190203]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:48:44 compute-0 sudo[189690]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:44 compute-0 sudo[191027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glpmklfiflnuqfbklrrdbixpppscijbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157724.3063338-336-258133830627254/AnsiballZ_systemd.py'
Nov 26 11:48:44 compute-0 sudo[191027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:48:44 compute-0 ceph-mon[74928]: pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:44 compute-0 python3.9[191050]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 11:48:44 compute-0 systemd[1]: Reloading.
Nov 26 11:48:44 compute-0 systemd-sysv-generator[191552]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:48:44 compute-0 systemd-rc-local-generator[191549]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:48:45 compute-0 sudo[191027]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:45 compute-0 sudo[192384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clwxcpuuatabkkxukfiepldedbpklfun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157725.1484866-336-183871525176069/AnsiballZ_systemd.py'
Nov 26 11:48:45 compute-0 sudo[192384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:48:45 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:45 compute-0 python3.9[192407]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 11:48:45 compute-0 systemd[1]: Reloading.
Nov 26 11:48:45 compute-0 systemd-rc-local-generator[192965]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:48:45 compute-0 systemd-sysv-generator[192968]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:48:45 compute-0 sudo[192384]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:46 compute-0 sudo[193786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhisjuofedzhzwehmlgtiibyfcxyegkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157726.010261-336-155020619696446/AnsiballZ_systemd.py'
Nov 26 11:48:46 compute-0 sudo[193786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:48:46 compute-0 python3.9[193811]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 11:48:46 compute-0 systemd[1]: Reloading.
Nov 26 11:48:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:48:46 compute-0 systemd-rc-local-generator[194310]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:48:46 compute-0 systemd-sysv-generator[194319]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:48:46 compute-0 ceph-mon[74928]: pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:46 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 11:48:46 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 26 11:48:46 compute-0 systemd[1]: man-db-cache-update.service: Consumed 7.084s CPU time.
Nov 26 11:48:46 compute-0 sudo[193786]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:46 compute-0 systemd[1]: run-r467f789992674b7d8e4902f4473b3019.service: Deactivated successfully.
Nov 26 11:48:47 compute-0 sudo[194672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbvnweceecrlsccpxbncgbiprmkhoesb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157726.8900645-365-24550097044701/AnsiballZ_systemd.py'
Nov 26 11:48:47 compute-0 sudo[194672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:48:47 compute-0 python3.9[194674]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 11:48:47 compute-0 systemd[1]: Reloading.
Nov 26 11:48:47 compute-0 systemd-rc-local-generator[194702]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:48:47 compute-0 systemd-sysv-generator[194705]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:48:47 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:47 compute-0 sudo[194672]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:47 compute-0 sudo[194862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbizdlegvwatreojfufroooqwpivwikv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157727.7415164-365-65723808026187/AnsiballZ_systemd.py'
Nov 26 11:48:47 compute-0 sudo[194862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:48:48 compute-0 python3.9[194864]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 11:48:48 compute-0 systemd[1]: Reloading.
Nov 26 11:48:48 compute-0 systemd-rc-local-generator[194891]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:48:48 compute-0 systemd-sysv-generator[194894]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:48:48 compute-0 sudo[194862]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:48 compute-0 ceph-mon[74928]: pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:48:48.688120) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157728688172, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 839, "num_deletes": 251, "total_data_size": 1148926, "memory_usage": 1166432, "flush_reason": "Manual Compaction"}
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157728691576, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1138601, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8836, "largest_seqno": 9674, "table_properties": {"data_size": 1134395, "index_size": 1922, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8865, "raw_average_key_size": 18, "raw_value_size": 1125959, "raw_average_value_size": 2365, "num_data_blocks": 89, "num_entries": 476, "num_filter_entries": 476, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764157651, "oldest_key_time": 1764157651, "file_creation_time": 1764157728, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "363c2a1d-8d28-40b7-a8ff-7233f1c9b7d5", "db_session_id": "CJT49RLFB1C6KNYXG0ER", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 3475 microseconds, and 2570 cpu microseconds.
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:48:48.691603) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1138601 bytes OK
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:48:48.691615) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:48:48.691971) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:48:48.691980) EVENT_LOG_v1 {"time_micros": 1764157728691977, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:48:48.691991) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1144794, prev total WAL file size 1144794, number of live WAL files 2.
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:48:48.692338) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1111KB)], [23(6709KB)]
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157728692361, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8009371, "oldest_snapshot_seqno": -1}
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3282 keys, 6215750 bytes, temperature: kUnknown
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157728705305, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6215750, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6191775, "index_size": 14650, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 79579, "raw_average_key_size": 24, "raw_value_size": 6130403, "raw_average_value_size": 1867, "num_data_blocks": 640, "num_entries": 3282, "num_filter_entries": 3282, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764157079, "oldest_key_time": 0, "file_creation_time": 1764157728, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "363c2a1d-8d28-40b7-a8ff-7233f1c9b7d5", "db_session_id": "CJT49RLFB1C6KNYXG0ER", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:48:48.705580) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6215750 bytes
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:48:48.706078) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 611.4 rd, 474.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 6.6 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(12.5) write-amplify(5.5) OK, records in: 3796, records dropped: 514 output_compression: NoCompression
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:48:48.706093) EVENT_LOG_v1 {"time_micros": 1764157728706085, "job": 8, "event": "compaction_finished", "compaction_time_micros": 13101, "compaction_time_cpu_micros": 10428, "output_level": 6, "num_output_files": 1, "total_output_size": 6215750, "num_input_records": 3796, "num_output_records": 3282, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157728706573, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157728707502, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:48:48.692303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:48:48.707596) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:48:48.707599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:48:48.707601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:48:48.707602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:48:48 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:48:48.707603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:48:48 compute-0 sudo[195051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-essjqfbgrirogcrmpztwcgehvvctipif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157728.576902-365-250681049709463/AnsiballZ_systemd.py'
Nov 26 11:48:48 compute-0 sudo[195051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:48:49 compute-0 python3.9[195053]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 11:48:49 compute-0 systemd[1]: Reloading.
Nov 26 11:48:49 compute-0 systemd-rc-local-generator[195077]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:48:49 compute-0 systemd-sysv-generator[195080]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:48:49 compute-0 sudo[195051]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:49 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:49 compute-0 sudo[195241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkatrcoegvrkcofexckwijpxbqifquso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157729.410361-365-182371398353284/AnsiballZ_systemd.py'
Nov 26 11:48:49 compute-0 sudo[195241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:48:49 compute-0 python3.9[195243]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 11:48:49 compute-0 sudo[195241]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:50 compute-0 sudo[195396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uovaztoxwcgbjepxgqghmovyawfoqprp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157729.9981081-365-55178230507698/AnsiballZ_systemd.py'
Nov 26 11:48:50 compute-0 sudo[195396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:48:50 compute-0 python3.9[195398]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 11:48:50 compute-0 systemd[1]: Reloading.
Nov 26 11:48:50 compute-0 systemd-rc-local-generator[195424]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:48:50 compute-0 systemd-sysv-generator[195429]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:48:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 11:48:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:48:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 11:48:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:48:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:48:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:48:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:48:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:48:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:48:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:48:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:48:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:48:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 11:48:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:48:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:48:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:48:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 11:48:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:48:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 11:48:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:48:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:48:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:48:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 11:48:50 compute-0 ceph-mon[74928]: pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:50 compute-0 sudo[195396]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:51 compute-0 sudo[195585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuwbpbycqxfyzklqaxncjufsixsltijk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157730.9601297-401-109471610761263/AnsiballZ_systemd.py'
Nov 26 11:48:51 compute-0 sudo[195585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:48:51 compute-0 python3.9[195587]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 11:48:51 compute-0 systemd[1]: Reloading.
Nov 26 11:48:51 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:51 compute-0 systemd-sysv-generator[195617]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:48:51 compute-0 systemd-rc-local-generator[195614]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:48:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:48:51 compute-0 sudo[195625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:48:51 compute-0 sudo[195625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:48:51 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 26 11:48:51 compute-0 sudo[195625]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:51 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 26 11:48:51 compute-0 sudo[195585]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:51 compute-0 sudo[195653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:48:51 compute-0 sudo[195653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:48:51 compute-0 sudo[195653]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:51 compute-0 sudo[195679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:48:51 compute-0 sudo[195679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:48:51 compute-0 sudo[195679]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:51 compute-0 sudo[195727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 26 11:48:51 compute-0 sudo[195727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:48:52 compute-0 sudo[195727]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:48:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:48:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:48:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:48:52 compute-0 sudo[195868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:48:52 compute-0 sudo[195868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:48:52 compute-0 sudo[195868]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:52 compute-0 sudo[195924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inqobbvqaqryewbjqhdvktmmekqmnncm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157731.8731327-409-23878608263387/AnsiballZ_systemd.py'
Nov 26 11:48:52 compute-0 sudo[195924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:48:52 compute-0 sudo[195918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:48:52 compute-0 sudo[195918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:48:52 compute-0 sudo[195918]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:52 compute-0 sudo[195948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:48:52 compute-0 sudo[195948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:48:52 compute-0 sudo[195948]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:52 compute-0 sudo[195973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 11:48:52 compute-0 sudo[195973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:48:52 compute-0 python3.9[195940]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 11:48:52 compute-0 sudo[195924]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:52 compute-0 sudo[195973]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:48:52 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:48:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:48:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:48:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:48:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:48:52 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev c21cd355-e984-4a11-a6e1-540774dcee89 does not exist
Nov 26 11:48:52 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev f6f02c9c-b7af-48d1-9a44-5b908397f647 does not exist
Nov 26 11:48:52 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 177eccbf-5410-43f5-8630-02aa7e1c6fe9 does not exist
Nov 26 11:48:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 11:48:52 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:48:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 11:48:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:48:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:48:52 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:48:52 compute-0 sudo[196106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:48:52 compute-0 sudo[196106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:48:52 compute-0 sudo[196106]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:52 compute-0 sudo[196154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:48:52 compute-0 sudo[196154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:48:52 compute-0 sudo[196154]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:52 compute-0 sudo[196201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:48:52 compute-0 sudo[196201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:48:52 compute-0 sudo[196201]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:52 compute-0 ceph-mon[74928]: pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:52 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:48:52 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:48:52 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:48:52 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:48:52 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:48:52 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:48:52 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:48:52 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:48:52 compute-0 sudo[196267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tobfzmuosfjwqyrpwhhmsnhbwztxxpjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157732.476844-409-254068656839499/AnsiballZ_systemd.py'
Nov 26 11:48:52 compute-0 sudo[196267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:48:52 compute-0 sudo[196243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 11:48:52 compute-0 sudo[196243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:48:52 compute-0 podman[196312]: 2025-11-26 11:48:52.93191513 +0000 UTC m=+0.025660331 container create 556e712970de4ffdcd3d3ecd1a0e4a0517e12b592a096c82ced71d2a62703724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackburn, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:48:52 compute-0 python3.9[196279]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 11:48:52 compute-0 systemd[1]: Started libpod-conmon-556e712970de4ffdcd3d3ecd1a0e4a0517e12b592a096c82ced71d2a62703724.scope.
Nov 26 11:48:52 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:48:52 compute-0 podman[196312]: 2025-11-26 11:48:52.991068189 +0000 UTC m=+0.084813391 container init 556e712970de4ffdcd3d3ecd1a0e4a0517e12b592a096c82ced71d2a62703724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackburn, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 11:48:52 compute-0 podman[196312]: 2025-11-26 11:48:52.995957436 +0000 UTC m=+0.089702636 container start 556e712970de4ffdcd3d3ecd1a0e4a0517e12b592a096c82ced71d2a62703724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackburn, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:48:52 compute-0 hopeful_blackburn[196326]: 167 167
Nov 26 11:48:53 compute-0 systemd[1]: libpod-556e712970de4ffdcd3d3ecd1a0e4a0517e12b592a096c82ced71d2a62703724.scope: Deactivated successfully.
Nov 26 11:48:53 compute-0 podman[196312]: 2025-11-26 11:48:52.999941754 +0000 UTC m=+0.093686955 container attach 556e712970de4ffdcd3d3ecd1a0e4a0517e12b592a096c82ced71d2a62703724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:48:53 compute-0 podman[196312]: 2025-11-26 11:48:53.000994029 +0000 UTC m=+0.094739231 container died 556e712970de4ffdcd3d3ecd1a0e4a0517e12b592a096c82ced71d2a62703724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackburn, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 11:48:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-30872c1bf473794c7357aab98e4d738e37623599853ce06d59381e25b7ef60d8-merged.mount: Deactivated successfully.
Nov 26 11:48:53 compute-0 podman[196312]: 2025-11-26 11:48:53.018400906 +0000 UTC m=+0.112146106 container remove 556e712970de4ffdcd3d3ecd1a0e4a0517e12b592a096c82ced71d2a62703724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 11:48:53 compute-0 podman[196312]: 2025-11-26 11:48:52.921112195 +0000 UTC m=+0.014857417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:48:53 compute-0 sudo[196267]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:53 compute-0 systemd[1]: libpod-conmon-556e712970de4ffdcd3d3ecd1a0e4a0517e12b592a096c82ced71d2a62703724.scope: Deactivated successfully.
Nov 26 11:48:53 compute-0 podman[196392]: 2025-11-26 11:48:53.136152648 +0000 UTC m=+0.026198446 container create 0187b838037442260a363c1493248b0575058ff71b8adfd8a9a96a63cf28e8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 11:48:53 compute-0 systemd[1]: Started libpod-conmon-0187b838037442260a363c1493248b0575058ff71b8adfd8a9a96a63cf28e8d1.scope.
Nov 26 11:48:53 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e371f4940abed2a9ae6aff30f653bbc3969ba2f8fea891a1a67522a563ca52c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e371f4940abed2a9ae6aff30f653bbc3969ba2f8fea891a1a67522a563ca52c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e371f4940abed2a9ae6aff30f653bbc3969ba2f8fea891a1a67522a563ca52c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e371f4940abed2a9ae6aff30f653bbc3969ba2f8fea891a1a67522a563ca52c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e371f4940abed2a9ae6aff30f653bbc3969ba2f8fea891a1a67522a563ca52c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:48:53 compute-0 podman[196392]: 2025-11-26 11:48:53.194767352 +0000 UTC m=+0.084813150 container init 0187b838037442260a363c1493248b0575058ff71b8adfd8a9a96a63cf28e8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_chandrasekhar, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:48:53 compute-0 podman[196392]: 2025-11-26 11:48:53.201225598 +0000 UTC m=+0.091271386 container start 0187b838037442260a363c1493248b0575058ff71b8adfd8a9a96a63cf28e8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 26 11:48:53 compute-0 podman[196392]: 2025-11-26 11:48:53.204462686 +0000 UTC m=+0.094508494 container attach 0187b838037442260a363c1493248b0575058ff71b8adfd8a9a96a63cf28e8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:48:53 compute-0 podman[196392]: 2025-11-26 11:48:53.125604083 +0000 UTC m=+0.015649891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:48:53 compute-0 sudo[196518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkikmmufkebyjzcsdouzbjwhqohbxqwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157733.1135888-409-196096550081729/AnsiballZ_systemd.py'
Nov 26 11:48:53 compute-0 sudo[196518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:48:53 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:53 compute-0 python3.9[196520]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 11:48:53 compute-0 sudo[196518]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:53 compute-0 sudo[196687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmrautxcxdwsxffngaqmxubxmochieal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157733.7221904-409-111822363620195/AnsiballZ_systemd.py'
Nov 26 11:48:53 compute-0 sudo[196687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:48:54 compute-0 admiring_chandrasekhar[196440]: --> passed data devices: 0 physical, 3 LVM
Nov 26 11:48:54 compute-0 admiring_chandrasekhar[196440]: --> relative data size: 1.0
Nov 26 11:48:54 compute-0 admiring_chandrasekhar[196440]: --> All data devices are unavailable
Nov 26 11:48:54 compute-0 systemd[1]: libpod-0187b838037442260a363c1493248b0575058ff71b8adfd8a9a96a63cf28e8d1.scope: Deactivated successfully.
Nov 26 11:48:54 compute-0 podman[196392]: 2025-11-26 11:48:54.036542278 +0000 UTC m=+0.926588066 container died 0187b838037442260a363c1493248b0575058ff71b8adfd8a9a96a63cf28e8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_chandrasekhar, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 11:48:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e371f4940abed2a9ae6aff30f653bbc3969ba2f8fea891a1a67522a563ca52c-merged.mount: Deactivated successfully.
Nov 26 11:48:54 compute-0 podman[196392]: 2025-11-26 11:48:54.067980069 +0000 UTC m=+0.958025857 container remove 0187b838037442260a363c1493248b0575058ff71b8adfd8a9a96a63cf28e8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 11:48:54 compute-0 systemd[1]: libpod-conmon-0187b838037442260a363c1493248b0575058ff71b8adfd8a9a96a63cf28e8d1.scope: Deactivated successfully.
Nov 26 11:48:54 compute-0 sudo[196243]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:54 compute-0 sudo[196710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:48:54 compute-0 sudo[196710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:48:54 compute-0 sudo[196710]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:54 compute-0 sudo[196735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:48:54 compute-0 sudo[196735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:48:54 compute-0 sudo[196735]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:54 compute-0 python3.9[196690]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 11:48:54 compute-0 sudo[196760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:48:54 compute-0 sudo[196760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:48:54 compute-0 sudo[196760]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:54 compute-0 sudo[196687]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:54 compute-0 sudo[196788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- lvm list --format json
Nov 26 11:48:54 compute-0 sudo[196788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:48:54 compute-0 podman[196950]: 2025-11-26 11:48:54.507813007 +0000 UTC m=+0.032406530 container create 6bdb3309ac27a54ac150ea3fb772eeecc690d7a4d27ced4e68ef8d716243291f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_carson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:48:54 compute-0 systemd[1]: Started libpod-conmon-6bdb3309ac27a54ac150ea3fb772eeecc690d7a4d27ced4e68ef8d716243291f.scope.
Nov 26 11:48:54 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:48:54 compute-0 sudo[197005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvordmoukuylvmzlzyylcgyblbcjsunt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157734.3323877-409-209693874045120/AnsiballZ_systemd.py'
Nov 26 11:48:54 compute-0 sudo[197005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:48:54 compute-0 podman[196950]: 2025-11-26 11:48:54.566318394 +0000 UTC m=+0.090911928 container init 6bdb3309ac27a54ac150ea3fb772eeecc690d7a4d27ced4e68ef8d716243291f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_carson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 11:48:54 compute-0 podman[196950]: 2025-11-26 11:48:54.571201528 +0000 UTC m=+0.095795052 container start 6bdb3309ac27a54ac150ea3fb772eeecc690d7a4d27ced4e68ef8d716243291f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_carson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 11:48:54 compute-0 podman[196950]: 2025-11-26 11:48:54.572503946 +0000 UTC m=+0.097097470 container attach 6bdb3309ac27a54ac150ea3fb772eeecc690d7a4d27ced4e68ef8d716243291f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 11:48:54 compute-0 ecstatic_carson[197006]: 167 167
Nov 26 11:48:54 compute-0 systemd[1]: libpod-6bdb3309ac27a54ac150ea3fb772eeecc690d7a4d27ced4e68ef8d716243291f.scope: Deactivated successfully.
Nov 26 11:48:54 compute-0 podman[196950]: 2025-11-26 11:48:54.576317612 +0000 UTC m=+0.100911136 container died 6bdb3309ac27a54ac150ea3fb772eeecc690d7a4d27ced4e68ef8d716243291f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 11:48:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcc8ba697a2e23f042f2f13624087d33342dc9a1e826f1dc47a9566ced4ed436-merged.mount: Deactivated successfully.
Nov 26 11:48:54 compute-0 podman[196950]: 2025-11-26 11:48:54.496067243 +0000 UTC m=+0.020660787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:48:54 compute-0 podman[196950]: 2025-11-26 11:48:54.596376382 +0000 UTC m=+0.120969906 container remove 6bdb3309ac27a54ac150ea3fb772eeecc690d7a4d27ced4e68ef8d716243291f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_carson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:48:54 compute-0 systemd[1]: libpod-conmon-6bdb3309ac27a54ac150ea3fb772eeecc690d7a4d27ced4e68ef8d716243291f.scope: Deactivated successfully.
Nov 26 11:48:54 compute-0 ceph-mon[74928]: pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:54 compute-0 podman[197030]: 2025-11-26 11:48:54.720138426 +0000 UTC m=+0.032546875 container create 3dd9629208e05d21608b0c94cf074ca3857e22d658eaf41b8a01a086d81f8094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hugle, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:48:54 compute-0 systemd[1]: Started libpod-conmon-3dd9629208e05d21608b0c94cf074ca3857e22d658eaf41b8a01a086d81f8094.scope.
Nov 26 11:48:54 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37b241047f4e5486c7b54aa181619ac038c195725156fea7105d6bc2fe21f6cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37b241047f4e5486c7b54aa181619ac038c195725156fea7105d6bc2fe21f6cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37b241047f4e5486c7b54aa181619ac038c195725156fea7105d6bc2fe21f6cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37b241047f4e5486c7b54aa181619ac038c195725156fea7105d6bc2fe21f6cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:48:54 compute-0 podman[197030]: 2025-11-26 11:48:54.771522015 +0000 UTC m=+0.083930474 container init 3dd9629208e05d21608b0c94cf074ca3857e22d658eaf41b8a01a086d81f8094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hugle, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 11:48:54 compute-0 podman[197030]: 2025-11-26 11:48:54.779472617 +0000 UTC m=+0.091881077 container start 3dd9629208e05d21608b0c94cf074ca3857e22d658eaf41b8a01a086d81f8094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 11:48:54 compute-0 podman[197030]: 2025-11-26 11:48:54.783347839 +0000 UTC m=+0.095756288 container attach 3dd9629208e05d21608b0c94cf074ca3857e22d658eaf41b8a01a086d81f8094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 11:48:54 compute-0 podman[197030]: 2025-11-26 11:48:54.708976294 +0000 UTC m=+0.021384763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:48:54 compute-0 python3.9[197010]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 11:48:54 compute-0 sudo[197005]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:55 compute-0 sudo[197201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bodeaffgweetfxjfitrzqswllyiyoswo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157734.985533-409-74751018354784/AnsiballZ_systemd.py'
Nov 26 11:48:55 compute-0 sudo[197201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:48:55 compute-0 boring_hugle[197044]: {
Nov 26 11:48:55 compute-0 boring_hugle[197044]:     "0": [
Nov 26 11:48:55 compute-0 boring_hugle[197044]:         {
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "devices": [
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "/dev/loop3"
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             ],
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "lv_name": "ceph_lv0",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "lv_size": "21470642176",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a9ad59a0-aa2e-4d92-b571-519d2d145b6a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "lv_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "name": "ceph_lv0",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "tags": {
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.block_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.cluster_name": "ceph",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.crush_device_class": "",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.encrypted": "0",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.osd_fsid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.osd_id": "0",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.type": "block",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.vdo": "0"
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             },
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "type": "block",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "vg_name": "ceph_vg0"
Nov 26 11:48:55 compute-0 boring_hugle[197044]:         }
Nov 26 11:48:55 compute-0 boring_hugle[197044]:     ],
Nov 26 11:48:55 compute-0 boring_hugle[197044]:     "1": [
Nov 26 11:48:55 compute-0 boring_hugle[197044]:         {
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "devices": [
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "/dev/loop4"
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             ],
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "lv_name": "ceph_lv1",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "lv_size": "21470642176",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2627095b-eef8-4027-bfef-68bf7cb6801f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "lv_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "name": "ceph_lv1",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "tags": {
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.block_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.cluster_name": "ceph",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.crush_device_class": "",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.encrypted": "0",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.osd_fsid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.osd_id": "1",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.type": "block",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.vdo": "0"
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             },
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "type": "block",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "vg_name": "ceph_vg1"
Nov 26 11:48:55 compute-0 boring_hugle[197044]:         }
Nov 26 11:48:55 compute-0 boring_hugle[197044]:     ],
Nov 26 11:48:55 compute-0 boring_hugle[197044]:     "2": [
Nov 26 11:48:55 compute-0 boring_hugle[197044]:         {
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "devices": [
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "/dev/loop5"
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             ],
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "lv_name": "ceph_lv2",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "lv_size": "21470642176",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d56156fb-7361-4bef-b06b-1320109b4323,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "lv_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "name": "ceph_lv2",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "tags": {
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.block_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.cluster_name": "ceph",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.crush_device_class": "",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.encrypted": "0",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.osd_fsid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.osd_id": "2",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.type": "block",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:                 "ceph.vdo": "0"
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             },
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "type": "block",
Nov 26 11:48:55 compute-0 boring_hugle[197044]:             "vg_name": "ceph_vg2"
Nov 26 11:48:55 compute-0 boring_hugle[197044]:         }
Nov 26 11:48:55 compute-0 boring_hugle[197044]:     ]
Nov 26 11:48:55 compute-0 boring_hugle[197044]: }
Nov 26 11:48:55 compute-0 systemd[1]: libpod-3dd9629208e05d21608b0c94cf074ca3857e22d658eaf41b8a01a086d81f8094.scope: Deactivated successfully.
Nov 26 11:48:55 compute-0 podman[197030]: 2025-11-26 11:48:55.419971313 +0000 UTC m=+0.732379762 container died 3dd9629208e05d21608b0c94cf074ca3857e22d658eaf41b8a01a086d81f8094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 11:48:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-37b241047f4e5486c7b54aa181619ac038c195725156fea7105d6bc2fe21f6cb-merged.mount: Deactivated successfully.
Nov 26 11:48:55 compute-0 python3.9[197203]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 11:48:55 compute-0 podman[197030]: 2025-11-26 11:48:55.454882679 +0000 UTC m=+0.767291129 container remove 3dd9629208e05d21608b0c94cf074ca3857e22d658eaf41b8a01a086d81f8094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hugle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 11:48:55 compute-0 systemd[1]: libpod-conmon-3dd9629208e05d21608b0c94cf074ca3857e22d658eaf41b8a01a086d81f8094.scope: Deactivated successfully.
Nov 26 11:48:55 compute-0 sudo[196788]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:55 compute-0 sudo[197201]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:55 compute-0 sudo[197221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:48:55 compute-0 sudo[197221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:48:55 compute-0 sudo[197221]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:55 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:55 compute-0 sudo[197255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:48:55 compute-0 sudo[197255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:48:55 compute-0 sudo[197255]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:55 compute-0 sudo[197296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:48:55 compute-0 sudo[197296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:48:55 compute-0 sudo[197296]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:55 compute-0 sudo[197348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- raw list --format json
Nov 26 11:48:55 compute-0 sudo[197348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:48:55 compute-0 sudo[197487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyggoxxcqxdavvzducrxnctphwqilpda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157735.6038318-409-142059934628685/AnsiballZ_systemd.py'
Nov 26 11:48:55 compute-0 sudo[197487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:48:55 compute-0 podman[197505]: 2025-11-26 11:48:55.897719876 +0000 UTC m=+0.028291776 container create b353987d6b54616f748f31371835b656ed5e8237bf1881aefa9381dd2cc15569 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cray, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:48:55 compute-0 systemd[1]: Started libpod-conmon-b353987d6b54616f748f31371835b656ed5e8237bf1881aefa9381dd2cc15569.scope.
Nov 26 11:48:55 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:48:55 compute-0 podman[197505]: 2025-11-26 11:48:55.940772835 +0000 UTC m=+0.071344735 container init b353987d6b54616f748f31371835b656ed5e8237bf1881aefa9381dd2cc15569 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cray, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:48:55 compute-0 podman[197505]: 2025-11-26 11:48:55.945346195 +0000 UTC m=+0.075918095 container start b353987d6b54616f748f31371835b656ed5e8237bf1881aefa9381dd2cc15569 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cray, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 11:48:55 compute-0 podman[197505]: 2025-11-26 11:48:55.946383001 +0000 UTC m=+0.076954901 container attach b353987d6b54616f748f31371835b656ed5e8237bf1881aefa9381dd2cc15569 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cray, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 11:48:55 compute-0 charming_cray[197518]: 167 167
Nov 26 11:48:55 compute-0 systemd[1]: libpod-b353987d6b54616f748f31371835b656ed5e8237bf1881aefa9381dd2cc15569.scope: Deactivated successfully.
Nov 26 11:48:55 compute-0 conmon[197518]: conmon b353987d6b54616f748f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b353987d6b54616f748f31371835b656ed5e8237bf1881aefa9381dd2cc15569.scope/container/memory.events
Nov 26 11:48:55 compute-0 podman[197505]: 2025-11-26 11:48:55.949722843 +0000 UTC m=+0.080294744 container died b353987d6b54616f748f31371835b656ed5e8237bf1881aefa9381dd2cc15569 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:48:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae5bb5559d17b3155ded68c8b9bb886591d85fbd95308d2c03d4c58f2f448918-merged.mount: Deactivated successfully.
Nov 26 11:48:55 compute-0 podman[197505]: 2025-11-26 11:48:55.972024385 +0000 UTC m=+0.102596286 container remove b353987d6b54616f748f31371835b656ed5e8237bf1881aefa9381dd2cc15569 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 11:48:55 compute-0 podman[197505]: 2025-11-26 11:48:55.885348872 +0000 UTC m=+0.015920772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:48:55 compute-0 systemd[1]: libpod-conmon-b353987d6b54616f748f31371835b656ed5e8237bf1881aefa9381dd2cc15569.scope: Deactivated successfully.
Nov 26 11:48:56 compute-0 python3.9[197492]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 11:48:56 compute-0 podman[197541]: 2025-11-26 11:48:56.098343234 +0000 UTC m=+0.034753319 container create 5f848337c76dcd35f8bca3e82c79f64bdc9435e3a6f42fef241d2026d1a92a8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:48:56 compute-0 sudo[197487]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:56 compute-0 systemd[1]: Started libpod-conmon-5f848337c76dcd35f8bca3e82c79f64bdc9435e3a6f42fef241d2026d1a92a8a.scope.
Nov 26 11:48:56 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:48:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256bbde8491fb931df0e78e5980307efb53fb77691cc140cf58b78198d6e7b1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:48:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256bbde8491fb931df0e78e5980307efb53fb77691cc140cf58b78198d6e7b1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:48:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256bbde8491fb931df0e78e5980307efb53fb77691cc140cf58b78198d6e7b1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:48:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256bbde8491fb931df0e78e5980307efb53fb77691cc140cf58b78198d6e7b1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:48:56 compute-0 podman[197541]: 2025-11-26 11:48:56.162027934 +0000 UTC m=+0.098438030 container init 5f848337c76dcd35f8bca3e82c79f64bdc9435e3a6f42fef241d2026d1a92a8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_vaughan, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 11:48:56 compute-0 podman[197541]: 2025-11-26 11:48:56.167382408 +0000 UTC m=+0.103792495 container start 5f848337c76dcd35f8bca3e82c79f64bdc9435e3a6f42fef241d2026d1a92a8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 26 11:48:56 compute-0 podman[197541]: 2025-11-26 11:48:56.172043293 +0000 UTC m=+0.108453399 container attach 5f848337c76dcd35f8bca3e82c79f64bdc9435e3a6f42fef241d2026d1a92a8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 26 11:48:56 compute-0 podman[197541]: 2025-11-26 11:48:56.087277714 +0000 UTC m=+0.023687820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:48:56 compute-0 sudo[197711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytljjeaoqdawbkswxovcgjcbsyxhlumq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157736.2182813-409-176269288425375/AnsiballZ_systemd.py'
Nov 26 11:48:56 compute-0 sudo[197711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:48:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:48:56 compute-0 python3.9[197713]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 11:48:56 compute-0 ceph-mon[74928]: pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:56 compute-0 sudo[197711]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:56 compute-0 eager_vaughan[197557]: {
Nov 26 11:48:56 compute-0 eager_vaughan[197557]:     "2627095b-eef8-4027-bfef-68bf7cb6801f": {
Nov 26 11:48:56 compute-0 eager_vaughan[197557]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:48:56 compute-0 eager_vaughan[197557]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 11:48:56 compute-0 eager_vaughan[197557]:         "osd_id": 1,
Nov 26 11:48:56 compute-0 eager_vaughan[197557]:         "osd_uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:48:56 compute-0 eager_vaughan[197557]:         "type": "bluestore"
Nov 26 11:48:56 compute-0 eager_vaughan[197557]:     },
Nov 26 11:48:56 compute-0 eager_vaughan[197557]:     "a9ad59a0-aa2e-4d92-b571-519d2d145b6a": {
Nov 26 11:48:56 compute-0 eager_vaughan[197557]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:48:56 compute-0 eager_vaughan[197557]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 11:48:56 compute-0 eager_vaughan[197557]:         "osd_id": 0,
Nov 26 11:48:56 compute-0 eager_vaughan[197557]:         "osd_uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:48:56 compute-0 eager_vaughan[197557]:         "type": "bluestore"
Nov 26 11:48:56 compute-0 eager_vaughan[197557]:     },
Nov 26 11:48:56 compute-0 eager_vaughan[197557]:     "d56156fb-7361-4bef-b06b-1320109b4323": {
Nov 26 11:48:56 compute-0 eager_vaughan[197557]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:48:56 compute-0 eager_vaughan[197557]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 11:48:56 compute-0 eager_vaughan[197557]:         "osd_id": 2,
Nov 26 11:48:56 compute-0 eager_vaughan[197557]:         "osd_uuid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:48:56 compute-0 eager_vaughan[197557]:         "type": "bluestore"
Nov 26 11:48:56 compute-0 eager_vaughan[197557]:     }
Nov 26 11:48:56 compute-0 eager_vaughan[197557]: }
Nov 26 11:48:56 compute-0 systemd[1]: libpod-5f848337c76dcd35f8bca3e82c79f64bdc9435e3a6f42fef241d2026d1a92a8a.scope: Deactivated successfully.
Nov 26 11:48:56 compute-0 conmon[197557]: conmon 5f848337c76dcd35f8bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5f848337c76dcd35f8bca3e82c79f64bdc9435e3a6f42fef241d2026d1a92a8a.scope/container/memory.events
Nov 26 11:48:56 compute-0 podman[197541]: 2025-11-26 11:48:56.933269226 +0000 UTC m=+0.869679312 container died 5f848337c76dcd35f8bca3e82c79f64bdc9435e3a6f42fef241d2026d1a92a8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_vaughan, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 11:48:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-256bbde8491fb931df0e78e5980307efb53fb77691cc140cf58b78198d6e7b1f-merged.mount: Deactivated successfully.
Nov 26 11:48:56 compute-0 podman[197541]: 2025-11-26 11:48:56.964626255 +0000 UTC m=+0.901036341 container remove 5f848337c76dcd35f8bca3e82c79f64bdc9435e3a6f42fef241d2026d1a92a8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_vaughan, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:48:56 compute-0 systemd[1]: libpod-conmon-5f848337c76dcd35f8bca3e82c79f64bdc9435e3a6f42fef241d2026d1a92a8a.scope: Deactivated successfully.
Nov 26 11:48:56 compute-0 sudo[197348]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:48:56 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:48:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:48:57 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:48:57 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 3109d41c-8d02-4e26-b67b-0e5be60d740a does not exist
Nov 26 11:48:57 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 9cd1b1c2-f24a-42b8-b1eb-72527b4a05b9 does not exist
Nov 26 11:48:57 compute-0 sudo[197924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wouojfzoywxausmdqhhcadijdcfdalgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157736.8156633-409-7930780477696/AnsiballZ_systemd.py'
Nov 26 11:48:57 compute-0 sudo[197924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:48:57 compute-0 sudo[197887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:48:57 compute-0 sudo[197887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:48:57 compute-0 sudo[197887]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:57 compute-0 sudo[197932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:48:57 compute-0 sudo[197932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:48:57 compute-0 sudo[197932]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:57 compute-0 python3.9[197929]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 11:48:57 compute-0 sudo[197924]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:57 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:57 compute-0 sudo[198109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjrytxrzbbpxgwqwcptpsnryhsofuemg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157737.4337766-409-60697824789418/AnsiballZ_systemd.py'
Nov 26 11:48:57 compute-0 sudo[198109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:48:57 compute-0 python3.9[198111]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 11:48:57 compute-0 sudo[198109]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:57 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:48:57 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:48:57 compute-0 ceph-mon[74928]: pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:58 compute-0 sudo[198264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmxsnprtrbzohlreelchaxdzcmtkcdcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157738.0266526-409-205152502245960/AnsiballZ_systemd.py'
Nov 26 11:48:58 compute-0 sudo[198264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:48:58 compute-0 python3.9[198266]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 11:48:58 compute-0 sudo[198264]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:58 compute-0 sudo[198419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bypploufwhlxgjwfhsuaismekxresmww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157738.6092837-409-244048090181366/AnsiballZ_systemd.py'
Nov 26 11:48:58 compute-0 sudo[198419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:48:59 compute-0 python3.9[198421]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 11:48:59 compute-0 sudo[198419]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:59 compute-0 sudo[198574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtnrsxrgbhhtsxlfnpxqqithbgnojfnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157739.1886907-409-45560553206166/AnsiballZ_systemd.py'
Nov 26 11:48:59 compute-0 sudo[198574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:48:59 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:48:59 compute-0 python3.9[198576]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 11:48:59 compute-0 sudo[198574]: pam_unix(sudo:session): session closed for user root
Nov 26 11:48:59 compute-0 sudo[198729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvsihuabvecjszdgfhiqrefondqcvzdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157739.7704732-409-140166342077679/AnsiballZ_systemd.py'
Nov 26 11:48:59 compute-0 sudo[198729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:00 compute-0 python3.9[198731]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 11:49:00 compute-0 sudo[198729]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:00 compute-0 ceph-mon[74928]: pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:00 compute-0 sudo[198884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tobdpjkfqyzqgzjjdxyuvxgkrlsethfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157740.5087624-511-203668181193009/AnsiballZ_file.py'
Nov 26 11:49:00 compute-0 sudo[198884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:00 compute-0 python3.9[198886]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:49:00 compute-0 sudo[198884]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:01 compute-0 sudo[199036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcihgyujzxdnwvliiujskyvnmmeiwxcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157740.9594564-511-55133675979202/AnsiballZ_file.py'
Nov 26 11:49:01 compute-0 sudo[199036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:01 compute-0 python3.9[199038]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:49:01 compute-0 sudo[199036]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:01 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:49:01 compute-0 sudo[199188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqggegtahwuzsiamiureuxrcvhjommmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157741.3977766-511-77169979127652/AnsiballZ_file.py'
Nov 26 11:49:01 compute-0 sudo[199188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:01 compute-0 python3.9[199190]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:49:01 compute-0 sudo[199188]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:02 compute-0 sudo[199340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouchemvxrpjfublojbfebvgfunuvahjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157741.8467546-511-185134024358718/AnsiballZ_file.py'
Nov 26 11:49:02 compute-0 sudo[199340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:02 compute-0 python3.9[199342]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:49:02 compute-0 sudo[199340]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:02 compute-0 auditd[671]: Audit daemon rotating log files
Nov 26 11:49:02 compute-0 sudo[199492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpvafuftzvlwgdyfmtupxokaqqhajpea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157742.2958703-511-140206262966543/AnsiballZ_file.py'
Nov 26 11:49:02 compute-0 sudo[199492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:02 compute-0 ceph-mon[74928]: pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:02 compute-0 python3.9[199494]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:49:02 compute-0 sudo[199492]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:02 compute-0 sudo[199644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fglbvrhjqhkiiztfqeyvcepeqdwjgccv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157742.7350173-511-177807013312435/AnsiballZ_file.py'
Nov 26 11:49:02 compute-0 sudo[199644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:49:02.982 159928 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:49:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:49:02.983 159928 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:49:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:49:02.983 159928 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:49:03 compute-0 python3.9[199646]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:49:03 compute-0 sudo[199644]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:03 compute-0 sudo[199796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xonwxjqtnnieecryevocihndzhaqctsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157743.2103093-554-66656848319145/AnsiballZ_stat.py'
Nov 26 11:49:03 compute-0 sudo[199796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:03 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:03 compute-0 python3.9[199798]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:03 compute-0 sudo[199796]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:04 compute-0 sudo[199931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flqusyzuzwwgxjkdbghplxkayiqzqqdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157743.2103093-554-66656848319145/AnsiballZ_copy.py'
Nov 26 11:49:04 compute-0 sudo[199931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:04 compute-0 podman[199895]: 2025-11-26 11:49:04.100298404 +0000 UTC m=+0.039895512 container health_status 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 26 11:49:04 compute-0 python3.9[199939]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764157743.2103093-554-66656848319145/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:04 compute-0 sudo[199931]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:04 compute-0 sudo[200089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npfdndodtkiokbsqhishivjxouvceaft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157744.3601375-554-165997852154367/AnsiballZ_stat.py'
Nov 26 11:49:04 compute-0 sudo[200089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:04 compute-0 ceph-mon[74928]: pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:04 compute-0 python3.9[200091]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:04 compute-0 sudo[200089]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:04 compute-0 sudo[200214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flqvndyxuvrzmkkolbowpnajninpbdrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157744.3601375-554-165997852154367/AnsiballZ_copy.py'
Nov 26 11:49:04 compute-0 sudo[200214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:05 compute-0 python3.9[200216]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764157744.3601375-554-165997852154367/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:05 compute-0 sudo[200214]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:05 compute-0 sudo[200366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtzecfwqxlhhhgdcuxvsbkoebzvozhtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157745.2083962-554-6348044188435/AnsiballZ_stat.py'
Nov 26 11:49:05 compute-0 sudo[200366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:05 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:05 compute-0 python3.9[200368]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:05 compute-0 sudo[200366]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:05 compute-0 sudo[200491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zueevcpyvawykwctsixshddyylzlefce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157745.2083962-554-6348044188435/AnsiballZ_copy.py'
Nov 26 11:49:05 compute-0 sudo[200491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:05 compute-0 python3.9[200493]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764157745.2083962-554-6348044188435/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:05 compute-0 sudo[200491]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:06 compute-0 sudo[200643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rprutyxyfcmxuaupeufnvmhfvtuwbbzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157746.0587392-554-251916424271840/AnsiballZ_stat.py'
Nov 26 11:49:06 compute-0 sudo[200643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:06 compute-0 python3.9[200645]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:06 compute-0 sudo[200643]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:49:06 compute-0 ceph-mon[74928]: pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:06 compute-0 sudo[200768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woiwwljicfsmxbemyeicxxszzljztpip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157746.0587392-554-251916424271840/AnsiballZ_copy.py'
Nov 26 11:49:06 compute-0 sudo[200768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:06 compute-0 python3.9[200770]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764157746.0587392-554-251916424271840/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:06 compute-0 sudo[200768]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:07 compute-0 sudo[200920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xovikxjqfysnsuigwwoqkecjoaciwshg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157746.8914027-554-256337557102669/AnsiballZ_stat.py'
Nov 26 11:49:07 compute-0 sudo[200920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:07 compute-0 python3.9[200922]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:07 compute-0 sudo[200920]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:07 compute-0 sudo[201045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yizmifmajqlxhordxyrpuxrckofcmktq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157746.8914027-554-256337557102669/AnsiballZ_copy.py'
Nov 26 11:49:07 compute-0 sudo[201045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:07 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:07 compute-0 python3.9[201047]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764157746.8914027-554-256337557102669/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:07 compute-0 sudo[201045]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:07 compute-0 sudo[201197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urxtcgpsrfovarmiirpwufnmecmwsaxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157747.7270684-554-99923430765438/AnsiballZ_stat.py'
Nov 26 11:49:07 compute-0 sudo[201197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:08 compute-0 python3.9[201199]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:08 compute-0 sudo[201197]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:08 compute-0 sudo[201322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiubvqydhgnopugdfjlcpaapjkigpvuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157747.7270684-554-99923430765438/AnsiballZ_copy.py'
Nov 26 11:49:08 compute-0 sudo[201322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:08 compute-0 python3.9[201324]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764157747.7270684-554-99923430765438/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:08 compute-0 sudo[201322]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:08 compute-0 ceph-mon[74928]: pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:08 compute-0 sudo[201474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtyxrqnonspbygqmeokhsudstbhfwfso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157748.5676699-554-196029636217926/AnsiballZ_stat.py'
Nov 26 11:49:08 compute-0 sudo[201474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:08 compute-0 python3.9[201476]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:08 compute-0 sudo[201474]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:09 compute-0 sudo[201597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywvgtkglmkeobrkxyhuyanolvrwmrbyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157748.5676699-554-196029636217926/AnsiballZ_copy.py'
Nov 26 11:49:09 compute-0 sudo[201597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:09 compute-0 python3.9[201599]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764157748.5676699-554-196029636217926/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:09 compute-0 sudo[201597]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:09 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:09 compute-0 sudo[201749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdqocpzxqbbjyeuhqejzzrfzkwxvplur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157749.3963635-554-203875473941518/AnsiballZ_stat.py'
Nov 26 11:49:09 compute-0 sudo[201749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:09 compute-0 python3.9[201751]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:09 compute-0 sudo[201749]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:09 compute-0 sudo[201874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lijmltuzpebysmcdbgacoefxyaikaxxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157749.3963635-554-203875473941518/AnsiballZ_copy.py'
Nov 26 11:49:09 compute-0 sudo[201874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:10 compute-0 python3.9[201876]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764157749.3963635-554-203875473941518/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:10 compute-0 sudo[201874]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:10 compute-0 sudo[202026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apqudnuagxsfrgmzbdvibhusrqnkehnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157750.275713-667-73896930098738/AnsiballZ_command.py'
Nov 26 11:49:10 compute-0 sudo[202026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:10 compute-0 ceph-mon[74928]: pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:10 compute-0 python3.9[202028]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 26 11:49:10 compute-0 sudo[202026]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:10 compute-0 sudo[202179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymkoxhlmedmcbsuqwvbgllixjwxasjha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157750.7969337-676-247278086923239/AnsiballZ_file.py'
Nov 26 11:49:10 compute-0 sudo[202179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:11 compute-0 python3.9[202181]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:11 compute-0 sudo[202179]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:49:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:49:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:49:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:49:11 compute-0 sudo[202331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypeodcvfszmibbcimznwupmxzjrcchmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157751.2502582-676-135841300660833/AnsiballZ_file.py'
Nov 26 11:49:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:49:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:49:11 compute-0 sudo[202331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:11 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:49:11 compute-0 python3.9[202333]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:11 compute-0 sudo[202331]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:11 compute-0 sudo[202483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahozqpmrvmpjxvpthamknagvcvozuqvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157751.790418-676-161184816967530/AnsiballZ_file.py'
Nov 26 11:49:11 compute-0 sudo[202483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:12 compute-0 python3.9[202485]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:12 compute-0 sudo[202483]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:12 compute-0 sudo[202635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmtzioeglunhhnfpcjegsfzvlszunahy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157752.240082-676-148542454640719/AnsiballZ_file.py'
Nov 26 11:49:12 compute-0 sudo[202635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:12 compute-0 python3.9[202637]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:12 compute-0 sudo[202635]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:12 compute-0 ceph-mon[74928]: pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:12 compute-0 sudo[202787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iavygwohpetcwrqcqangtvcfmvfbaujn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157752.6758075-676-220849280046109/AnsiballZ_file.py'
Nov 26 11:49:12 compute-0 sudo[202787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:13 compute-0 python3.9[202789]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:13 compute-0 sudo[202787]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:13 compute-0 sudo[202939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suuyarlqqmmflmnqzpuczkrzngvsvjbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157753.119123-676-178861269593480/AnsiballZ_file.py'
Nov 26 11:49:13 compute-0 sudo[202939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:13 compute-0 python3.9[202941]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:13 compute-0 sudo[202939]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:13 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:13 compute-0 sudo[203091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yniurhzeibawmvalyexmxpbmbunlqvwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157753.5538933-676-29253582635082/AnsiballZ_file.py'
Nov 26 11:49:13 compute-0 sudo[203091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:13 compute-0 python3.9[203093]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:13 compute-0 sudo[203091]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:14 compute-0 sudo[203243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hctdohwjtcfdnkxaynvejnfxsmwwsewj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157753.9930894-676-59958841953763/AnsiballZ_file.py'
Nov 26 11:49:14 compute-0 sudo[203243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:14 compute-0 python3.9[203245]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:14 compute-0 sudo[203243]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:14 compute-0 ceph-mon[74928]: pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:14 compute-0 sudo[203406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onqvwjcxctdcznwznkiurjhynikuctou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157754.4295654-676-232245897295881/AnsiballZ_file.py'
Nov 26 11:49:14 compute-0 sudo[203406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:14 compute-0 podman[203369]: 2025-11-26 11:49:14.638893913 +0000 UTC m=+0.061164316 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:49:14 compute-0 python3.9[203415]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:14 compute-0 sudo[203406]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:15 compute-0 sudo[203571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bufeemlwjochayxgehzttprlelaggrir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157754.8874726-676-60028979940643/AnsiballZ_file.py'
Nov 26 11:49:15 compute-0 sudo[203571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:15 compute-0 python3.9[203573]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:15 compute-0 sudo[203571]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:15 compute-0 sudo[203723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckupgcezgiaqzxhhbhmwnojmuxvdzrkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157755.3249238-676-168323351155478/AnsiballZ_file.py'
Nov 26 11:49:15 compute-0 sudo[203723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:15 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:15 compute-0 python3.9[203725]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:15 compute-0 sudo[203723]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:15 compute-0 sudo[203875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czdhapjakodijkwcuxfvibavnqyyvmob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157755.7649457-676-265709914817108/AnsiballZ_file.py'
Nov 26 11:49:15 compute-0 sudo[203875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:16 compute-0 python3.9[203877]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:16 compute-0 sudo[203875]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:16 compute-0 sudo[204027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqxgejhqaxueqkzyxcqybhxxqqsjyzcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157756.2089777-676-262462569073662/AnsiballZ_file.py'
Nov 26 11:49:16 compute-0 sudo[204027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:16 compute-0 python3.9[204029]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:16 compute-0 sudo[204027]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:49:16 compute-0 ceph-mon[74928]: pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:16 compute-0 sudo[204179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuibrlyidaeyeysinpatfnlfwkogfjjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157756.64743-676-61116799189498/AnsiballZ_file.py'
Nov 26 11:49:16 compute-0 sudo[204179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:16 compute-0 python3.9[204181]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:16 compute-0 sudo[204179]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:17 compute-0 sudo[204331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhocithjkdxdffgwttcjgwcalufsknmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157757.1207607-775-26496956348895/AnsiballZ_stat.py'
Nov 26 11:49:17 compute-0 sudo[204331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:17 compute-0 python3.9[204333]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:17 compute-0 sudo[204331]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:17 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:17 compute-0 sudo[204454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvmfwuroircoiilpjwcibuzdexxhjugr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157757.1207607-775-26496956348895/AnsiballZ_copy.py'
Nov 26 11:49:17 compute-0 sudo[204454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:17 compute-0 python3.9[204456]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157757.1207607-775-26496956348895/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:17 compute-0 sudo[204454]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:18 compute-0 sudo[204606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqxntcnbucuwylhewpsbucnfdjgrdlzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157757.9561303-775-116026803239815/AnsiballZ_stat.py'
Nov 26 11:49:18 compute-0 sudo[204606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:18 compute-0 python3.9[204608]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:18 compute-0 sudo[204606]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:18 compute-0 sudo[204729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzcpbvjameeepfwvozyyeljtytbyqrcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157757.9561303-775-116026803239815/AnsiballZ_copy.py'
Nov 26 11:49:18 compute-0 sudo[204729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:18 compute-0 ceph-mon[74928]: pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:18 compute-0 python3.9[204731]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157757.9561303-775-116026803239815/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:18 compute-0 sudo[204729]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:18 compute-0 sudo[204881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzitbespnasllbmwpdvfwzadwlgixkgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157758.773692-775-13184174782463/AnsiballZ_stat.py'
Nov 26 11:49:18 compute-0 sudo[204881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:19 compute-0 python3.9[204883]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:19 compute-0 sudo[204881]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:19 compute-0 sudo[205004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcwflwuqbvesftsjvvdsmwntspyzveun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157758.773692-775-13184174782463/AnsiballZ_copy.py'
Nov 26 11:49:19 compute-0 sudo[205004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:19 compute-0 python3.9[205006]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157758.773692-775-13184174782463/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:19 compute-0 sudo[205004]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:19 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:19 compute-0 sudo[205156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdlvvcyjqeblzpzaxwoxftcnbfwnzmcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157759.582791-775-36558266865343/AnsiballZ_stat.py'
Nov 26 11:49:19 compute-0 sudo[205156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:19 compute-0 python3.9[205158]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:19 compute-0 sudo[205156]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:20 compute-0 sudo[205279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmqxsfflvynrnxiwdkaqouzsknrvsppi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157759.582791-775-36558266865343/AnsiballZ_copy.py'
Nov 26 11:49:20 compute-0 sudo[205279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:20 compute-0 python3.9[205281]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157759.582791-775-36558266865343/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:20 compute-0 sudo[205279]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:20 compute-0 sudo[205431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukojgwvumfgquowptpahkezpcckxdxcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157760.4007246-775-75348603422786/AnsiballZ_stat.py'
Nov 26 11:49:20 compute-0 sudo[205431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:20 compute-0 ceph-mon[74928]: pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:20 compute-0 python3.9[205433]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:20 compute-0 sudo[205431]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:20 compute-0 sudo[205554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbrzadykavvnytnkpjjimhnockhhicij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157760.4007246-775-75348603422786/AnsiballZ_copy.py'
Nov 26 11:49:20 compute-0 sudo[205554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:20 compute-0 ceph-osd[88091]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 11:49:20 compute-0 ceph-osd[88091]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 5490 writes, 23K keys, 5490 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5490 writes, 826 syncs, 6.65 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5490 writes, 23K keys, 5490 commit groups, 1.0 writes per commit group, ingest: 18.37 MB, 0.03 MB/s
                                           Interval WAL: 5490 writes, 826 syncs, 6.65 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da97090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da97090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.3      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.3      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.3      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da97090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 26 11:49:21 compute-0 python3.9[205556]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157760.4007246-775-75348603422786/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:21 compute-0 sudo[205554]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:21 compute-0 sudo[205706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mczqqhijxglltxhreuoukxsubkgtppru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157761.2049892-775-234283173490029/AnsiballZ_stat.py'
Nov 26 11:49:21 compute-0 sudo[205706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:21 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:21 compute-0 python3.9[205708]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:21 compute-0 sudo[205706]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:49:21 compute-0 sudo[205829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbawrsyzhxjytmsxwxxjilchuprlonaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157761.2049892-775-234283173490029/AnsiballZ_copy.py'
Nov 26 11:49:21 compute-0 sudo[205829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:21 compute-0 python3.9[205831]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157761.2049892-775-234283173490029/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:21 compute-0 sudo[205829]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:22 compute-0 sudo[205981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxhzluwtstypblxiydaqvzvmabxtyvmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157762.0182538-775-218605686929632/AnsiballZ_stat.py'
Nov 26 11:49:22 compute-0 sudo[205981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:22 compute-0 python3.9[205983]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:22 compute-0 sudo[205981]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:22 compute-0 sudo[206104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vondcayluwigvdlqwjnvvsuzichfouym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157762.0182538-775-218605686929632/AnsiballZ_copy.py'
Nov 26 11:49:22 compute-0 sudo[206104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:22 compute-0 ceph-mon[74928]: pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:22 compute-0 python3.9[206106]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157762.0182538-775-218605686929632/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:22 compute-0 sudo[206104]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:23 compute-0 sudo[206256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktwesetrsbmpsluxmpkuqfevmidwatmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157762.8336809-775-261232751411173/AnsiballZ_stat.py'
Nov 26 11:49:23 compute-0 sudo[206256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:23 compute-0 python3.9[206258]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:23 compute-0 sudo[206256]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:23 compute-0 sudo[206379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbbmtomhjfendbttxrcycyhqsldhsumz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157762.8336809-775-261232751411173/AnsiballZ_copy.py'
Nov 26 11:49:23 compute-0 sudo[206379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:23 compute-0 python3.9[206381]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157762.8336809-775-261232751411173/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:23 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:23 compute-0 sudo[206379]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:23 compute-0 sudo[206531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enasjqdajhqcngkhgwwqsjujhtbovrgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157763.6381822-775-70843124799633/AnsiballZ_stat.py'
Nov 26 11:49:23 compute-0 sudo[206531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:23 compute-0 python3.9[206533]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:23 compute-0 sudo[206531]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:24 compute-0 sudo[206654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzqgxcqxzwpopqvylvsksqfldmvosqmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157763.6381822-775-70843124799633/AnsiballZ_copy.py'
Nov 26 11:49:24 compute-0 sudo[206654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:24 compute-0 python3.9[206656]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157763.6381822-775-70843124799633/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:24 compute-0 sudo[206654]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:24 compute-0 ceph-osd[89074]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 11:49:24 compute-0 ceph-osd[89074]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 6688 writes, 27K keys, 6688 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6688 writes, 1232 syncs, 5.43 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6688 writes, 27K keys, 6688 commit groups, 1.0 writes per commit group, ingest: 19.31 MB, 0.03 MB/s
                                           Interval WAL: 6688 writes, 1232 syncs, 5.43 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 26 11:49:24 compute-0 sudo[206806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqayymjaqcpkkgytgfiqgpscdczjtdmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157764.4274228-775-255352752465545/AnsiballZ_stat.py'
Nov 26 11:49:24 compute-0 sudo[206806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:24 compute-0 ceph-mon[74928]: pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:24 compute-0 python3.9[206808]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:24 compute-0 sudo[206806]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:24 compute-0 sudo[206929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnvndfxyhlbavsmoovobwlmbilzwpllw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157764.4274228-775-255352752465545/AnsiballZ_copy.py'
Nov 26 11:49:24 compute-0 sudo[206929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:25 compute-0 python3.9[206931]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157764.4274228-775-255352752465545/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:25 compute-0 sudo[206929]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:25 compute-0 sudo[207081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzhiepwtrchtcdfjkxahfjfhcgwxyswq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157765.2173789-775-266716806358413/AnsiballZ_stat.py'
Nov 26 11:49:25 compute-0 sudo[207081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:25 compute-0 python3.9[207083]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:25 compute-0 sudo[207081]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:25 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:25 compute-0 sudo[207204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzbxbzeuoghtqudghvwshabdxzmitiix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157765.2173789-775-266716806358413/AnsiballZ_copy.py'
Nov 26 11:49:25 compute-0 sudo[207204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:25 compute-0 python3.9[207206]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157765.2173789-775-266716806358413/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:25 compute-0 sudo[207204]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:26 compute-0 sudo[207356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiqayhycckfynmponxjhxraljxynbivq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157765.9962425-775-243618306121035/AnsiballZ_stat.py'
Nov 26 11:49:26 compute-0 sudo[207356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:26 compute-0 python3.9[207358]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:26 compute-0 sudo[207356]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:26 compute-0 sudo[207479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqsngbjzbwfyoxciahsfmhvmdoiaitsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157765.9962425-775-243618306121035/AnsiballZ_copy.py'
Nov 26 11:49:26 compute-0 sudo[207479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:49:26 compute-0 ceph-mon[74928]: pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:26 compute-0 python3.9[207481]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157765.9962425-775-243618306121035/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:26 compute-0 sudo[207479]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:26 compute-0 sudo[207631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfpjpmtakhseamrbomaccepmhgzsfmyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157766.7709827-775-78729284333223/AnsiballZ_stat.py'
Nov 26 11:49:26 compute-0 sudo[207631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:27 compute-0 python3.9[207633]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:27 compute-0 sudo[207631]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:27 compute-0 sudo[207754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awhxmrbitpcdnwkiczbrsvanbjdvfgcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157766.7709827-775-78729284333223/AnsiballZ_copy.py'
Nov 26 11:49:27 compute-0 sudo[207754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:27 compute-0 python3.9[207756]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157766.7709827-775-78729284333223/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:27 compute-0 sudo[207754]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:27 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:27 compute-0 sudo[207906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voubeosdovfjsjrmbobtraklluhvllav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157767.6087575-775-108010662846696/AnsiballZ_stat.py'
Nov 26 11:49:27 compute-0 sudo[207906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:27 compute-0 python3.9[207908]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:27 compute-0 sudo[207906]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:28 compute-0 sudo[208029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zntqvzvrwhmfrrjpqiigzbsoimfekjfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157767.6087575-775-108010662846696/AnsiballZ_copy.py'
Nov 26 11:49:28 compute-0 sudo[208029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:28 compute-0 python3.9[208031]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157767.6087575-775-108010662846696/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:28 compute-0 ceph-osd[90047]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 11:49:28 compute-0 ceph-osd[90047]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5518 writes, 23K keys, 5518 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5518 writes, 814 syncs, 6.78 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5515 writes, 23K keys, 5515 commit groups, 1.0 writes per commit group, ingest: 18.24 MB, 0.03 MB/s
                                           Interval WAL: 5515 writes, 813 syncs, 6.78 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea4851090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea4851090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea4851090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 26 11:49:28 compute-0 sudo[208029]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:28 compute-0 ceph-mon[74928]: pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:28 compute-0 python3.9[208181]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:49:29 compute-0 ceph-mgr[75197]: [devicehealth INFO root] Check health
Nov 26 11:49:29 compute-0 sudo[208334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-infamlwfqnsvqjhdphxiokcpmtxdgmpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157768.9490962-981-137723702931867/AnsiballZ_seboolean.py'
Nov 26 11:49:29 compute-0 sudo[208334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:29 compute-0 python3.9[208336]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 26 11:49:29 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:30 compute-0 sudo[208334]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:30 compute-0 sudo[208490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnzhccblstbinfglbluthxhewwdjqrdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157770.333058-989-132463713510357/AnsiballZ_copy.py'
Nov 26 11:49:30 compute-0 dbus-broker-launch[733]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 26 11:49:30 compute-0 sudo[208490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:30 compute-0 ceph-mon[74928]: pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:30 compute-0 python3.9[208492]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:30 compute-0 sudo[208490]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:30 compute-0 sudo[208642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laacibukuanvrqqivjcdfzfabiewized ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157770.7723694-989-179568177375608/AnsiballZ_copy.py'
Nov 26 11:49:30 compute-0 sudo[208642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:31 compute-0 python3.9[208644]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:31 compute-0 sudo[208642]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:31 compute-0 sudo[208794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrnioqmomvuyvbnnjudtgedbzkgfzwub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157771.214212-989-98862428360620/AnsiballZ_copy.py'
Nov 26 11:49:31 compute-0 sudo[208794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:31 compute-0 python3.9[208796]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:31 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:31 compute-0 sudo[208794]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:49:31 compute-0 sudo[208946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wchwffwmpujjiayzoxammmumzlioenoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157771.6479604-989-98274277570485/AnsiballZ_copy.py'
Nov 26 11:49:31 compute-0 sudo[208946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:31 compute-0 python3.9[208948]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:31 compute-0 sudo[208946]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:32 compute-0 sudo[209098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txsfgppstlrngvbioakafqusprgzczom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157772.088451-989-170557982447226/AnsiballZ_copy.py'
Nov 26 11:49:32 compute-0 sudo[209098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:32 compute-0 python3.9[209100]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:32 compute-0 sudo[209098]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:32 compute-0 ceph-mon[74928]: pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:32 compute-0 sudo[209250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bppwxquxnjhgrkfxkakzdqdhyzrqweqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157772.5789464-1025-149188419210126/AnsiballZ_copy.py'
Nov 26 11:49:32 compute-0 sudo[209250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:32 compute-0 python3.9[209252]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:32 compute-0 sudo[209250]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:33 compute-0 sudo[209402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuasljrkfrgowuoswukxlijchefoyzla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157773.0226145-1025-20263401845705/AnsiballZ_copy.py'
Nov 26 11:49:33 compute-0 sudo[209402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:33 compute-0 python3.9[209404]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:33 compute-0 sudo[209402]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:33 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:33 compute-0 sudo[209554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frqrldtumjzibnmejlyrnrirpixhauhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157773.45997-1025-242072461942649/AnsiballZ_copy.py'
Nov 26 11:49:33 compute-0 sudo[209554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:33 compute-0 python3.9[209556]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:33 compute-0 sudo[209554]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:34 compute-0 sudo[209706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elnhwguwfwdbeikogrcfrubujlpgwwgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157773.922678-1025-275475612829702/AnsiballZ_copy.py'
Nov 26 11:49:34 compute-0 sudo[209706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:34 compute-0 podman[209708]: 2025-11-26 11:49:34.17627932 +0000 UTC m=+0.039949043 container health_status 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 26 11:49:34 compute-0 python3.9[209709]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:34 compute-0 sudo[209706]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:34 compute-0 sudo[209874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrbgbukvzlaostayyqdgkrwgqxruggos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157774.392922-1025-84427936761153/AnsiballZ_copy.py'
Nov 26 11:49:34 compute-0 sudo[209874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:34 compute-0 ceph-mon[74928]: pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:34 compute-0 python3.9[209876]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:34 compute-0 sudo[209874]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:35 compute-0 sudo[210026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpszfocdiiermtligxcxqxvcniovquxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157774.8739457-1061-150763892939391/AnsiballZ_systemd.py'
Nov 26 11:49:35 compute-0 sudo[210026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:35 compute-0 python3.9[210028]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 11:49:35 compute-0 systemd[1]: Reloading.
Nov 26 11:49:35 compute-0 systemd-sysv-generator[210055]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:49:35 compute-0 systemd-rc-local-generator[210051]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:49:35 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:35 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Nov 26 11:49:35 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Nov 26 11:49:35 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 26 11:49:35 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 26 11:49:35 compute-0 systemd[1]: Starting libvirt logging daemon...
Nov 26 11:49:35 compute-0 systemd[1]: Started libvirt logging daemon.
Nov 26 11:49:35 compute-0 sudo[210026]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:35 compute-0 sudo[210219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thbbnxfylekbxzduduigstifaqkhklwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157775.7839448-1061-94207465283554/AnsiballZ_systemd.py'
Nov 26 11:49:35 compute-0 sudo[210219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:36 compute-0 python3.9[210221]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 11:49:36 compute-0 systemd[1]: Reloading.
Nov 26 11:49:36 compute-0 systemd-rc-local-generator[210242]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:49:36 compute-0 systemd-sysv-generator[210245]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:49:36 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 26 11:49:36 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 26 11:49:36 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 26 11:49:36 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 26 11:49:36 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 26 11:49:36 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 26 11:49:36 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 26 11:49:36 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 26 11:49:36 compute-0 sudo[210219]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:49:36 compute-0 ceph-mon[74928]: pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:36 compute-0 sudo[210435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fopeccypifhditnmvjucyoosondvgubl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157776.6326768-1061-135728539130443/AnsiballZ_systemd.py'
Nov 26 11:49:36 compute-0 sudo[210435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:37 compute-0 python3.9[210437]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 11:49:37 compute-0 systemd[1]: Reloading.
Nov 26 11:49:37 compute-0 systemd-rc-local-generator[210463]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:49:37 compute-0 systemd-sysv-generator[210467]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:49:37 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 26 11:49:37 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 26 11:49:37 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 26 11:49:37 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 26 11:49:37 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 26 11:49:37 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 26 11:49:37 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 26 11:49:37 compute-0 sudo[210435]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:37 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:37 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 26 11:49:37 compute-0 sudo[210646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viqiuxdygvsmoizkflbuemasavwmjcbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157777.494963-1061-163272419846173/AnsiballZ_systemd.py'
Nov 26 11:49:37 compute-0 sudo[210646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:37 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 26 11:49:37 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 26 11:49:37 compute-0 python3.9[210650]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 11:49:37 compute-0 systemd[1]: Reloading.
Nov 26 11:49:38 compute-0 systemd-sysv-generator[210685]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:49:38 compute-0 systemd-rc-local-generator[210681]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:49:38 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Nov 26 11:49:38 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 26 11:49:38 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 26 11:49:38 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 26 11:49:38 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 26 11:49:38 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 26 11:49:38 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 26 11:49:38 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 26 11:49:38 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 26 11:49:38 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 26 11:49:38 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 26 11:49:38 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 26 11:49:38 compute-0 sudo[210646]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:38 compute-0 setroubleshoot[210473]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l a83f4ee4-78ba-4cb5-bd5c-79412d5ef523
Nov 26 11:49:38 compute-0 setroubleshoot[210473]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify whether the domain needs this access, or whether a file on your system has the wrong permissions,
                                                  then turn on full auditing to get path information about the offending file and reproduce the error:
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate the AVC, then run
                                                  # ausearch -m avc -ts recent
                                                  If you see a PATH record, check the ownership and permissions of the file and fix them;
                                                  otherwise, report this as a bug in Bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default,
                                                  then you should report this as a bug.
                                                  You can generate a local policy module to allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
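The two suggestions above can be consolidated into one remediation sequence. This is a sketch only, assuming root access on compute-0 and that the virtlogd AVC records are still in the audit log; the module name my-virtlogd comes from the suggestion itself, and the final listing is an added verification step, not part of the original advice.

    sealert -l a83f4ee4-78ba-4cb5-bd5c-79412d5ef523             # full report referenced in the alert
    ausearch -m avc -ts recent -c 'virtlogd'                    # inspect the recent AVC records for virtlogd
    ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd   # build a local policy module from the raw records
    semodule -X 300 -i my-virtlogd.pp                           # install the module at priority 300
    semodule -l | grep my-virtlogd                              # confirm the module is now loaded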
                                                  
Nov 26 11:49:38 compute-0 sudo[210871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mympprhnhmpcaldjqgiohpzrsiralrig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157778.4003744-1061-38495763965009/AnsiballZ_systemd.py'
Nov 26 11:49:38 compute-0 sudo[210871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:38 compute-0 ceph-mon[74928]: pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:38 compute-0 python3.9[210873]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 11:49:38 compute-0 systemd[1]: Reloading.
Nov 26 11:49:38 compute-0 systemd-sysv-generator[210898]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:49:38 compute-0 systemd-rc-local-generator[210894]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:49:39 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Nov 26 11:49:39 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Nov 26 11:49:39 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 26 11:49:39 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 26 11:49:39 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 26 11:49:39 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 26 11:49:39 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 26 11:49:39 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 26 11:49:39 compute-0 sudo[210871]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:39 compute-0 sudo[211083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcuotcvbxajcnnoetjdxaimpwrwaxyuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157779.3590374-1098-182244696865850/AnsiballZ_file.py'
Nov 26 11:49:39 compute-0 sudo[211083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:39 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:39 compute-0 python3.9[211085]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:39 compute-0 sudo[211083]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:40 compute-0 sudo[211235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ermcvqlbwmbcotwleakpbaoxkoqxpyin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157779.8342948-1106-227058267679636/AnsiballZ_find.py'
Nov 26 11:49:40 compute-0 sudo[211235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:40 compute-0 python3.9[211237]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 11:49:40 compute-0 sudo[211235]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:40 compute-0 sudo[211387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcjnpcfsbsxstyxdolqodwavifagnawe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157780.3126116-1114-77123910849340/AnsiballZ_command.py'
Nov 26 11:49:40 compute-0 sudo[211387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:40 compute-0 python3.9[211389]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
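This pipeline pulls the cluster fsid out of the ceph.conf staged under /var/lib/openstack/config/ceph. Assuming that file carries the same fsid the rest of this run uses (it reappears below as the libvirt secret UUID and the FSID environment variable), the extraction would behave roughly like this sketch:

    $ awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
    ebab460c-3fd7-5f66-aa87-e10c143123f7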
Nov 26 11:49:40 compute-0 ceph-mon[74928]: pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:40 compute-0 sudo[211387]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:41 compute-0 python3.9[211543]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 11:49:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Optimize plan auto_2025-11-26_11:49:41
Nov 26 11:49:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 11:49:41 compute-0 ceph-mgr[75197]: [balancer INFO root] do_upmap
Nov 26 11:49:41 compute-0 ceph-mgr[75197]: [balancer INFO root] pools ['vms', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', '.mgr', 'default.rgw.meta', 'default.rgw.log']
Nov 26 11:49:41 compute-0 ceph-mgr[75197]: [balancer INFO root] prepared 0/10 changes
Nov 26 11:49:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:49:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:49:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:49:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:49:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:49:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:49:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 11:49:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:49:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 11:49:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:49:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:49:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:49:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:49:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:49:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:49:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:49:41 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:49:41 compute-0 python3.9[211693]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:42 compute-0 python3.9[211814]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764157781.4229054-1133-104856195921971/.source.xml follow=False _original_basename=secret.xml.j2 checksum=b1799f3b875e0916b0504cd9b0d2df6d27079a30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:42 compute-0 sudo[211964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gywmuygaccbwnlvuaddyrngsplmqxpdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157782.293712-1148-215399925627034/AnsiballZ_command.py'
Nov 26 11:49:42 compute-0 sudo[211964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:42 compute-0 python3.9[211966]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine ebab460c-3fd7-5f66-aa87-e10c143123f7
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:49:42 compute-0 polkitd[43470]: Registered Authentication Agent for unix-process:211968:255597 (system bus name :1.2719 [pkttyagent --process 211968 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 26 11:49:42 compute-0 ceph-mon[74928]: pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:42 compute-0 polkitd[43470]: Unregistered Authentication Agent for unix-process:211968:255597 (system bus name :1.2719, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 26 11:49:42 compute-0 polkitd[43470]: Registered Authentication Agent for unix-process:211967:255597 (system bus name :1.2720 [pkttyagent --process 211967 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 26 11:49:42 compute-0 polkitd[43470]: Unregistered Authentication Agent for unix-process:211967:255597 (system bus name :1.2720, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 26 11:49:42 compute-0 sudo[211964]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:43 compute-0 python3.9[212128]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:43 compute-0 sudo[212278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldrxynlwgdkvyvkuyoykayfsrmwsuxyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157783.3171031-1164-208053497905911/AnsiballZ_command.py'
Nov 26 11:49:43 compute-0 sudo[212278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:43 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:43 compute-0 sudo[212278]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:43 compute-0 sudo[212431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kybhmdtfosrcegtvpcucbxnfcvxgmlcd ; FSID=ebab460c-3fd7-5f66-aa87-e10c143123f7 KEY=AQCA5iZpAAAAABAAL6WSWuWVfNotwlMauF3Tqw== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157783.8053527-1172-202918888942940/AnsiballZ_command.py'
Nov 26 11:49:43 compute-0 sudo[212431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:44 compute-0 polkitd[43470]: Registered Authentication Agent for unix-process:212434:255749 (system bus name :1.2723 [pkttyagent --process 212434 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 26 11:49:44 compute-0 polkitd[43470]: Unregistered Authentication Agent for unix-process:212434:255749 (system bus name :1.2723, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 26 11:49:44 compute-0 sudo[212431]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:44 compute-0 sudo[212589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysganxymxmwylnyylvlqanfjsxwyfvlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157784.3314009-1180-177353804039539/AnsiballZ_copy.py'
Nov 26 11:49:44 compute-0 sudo[212589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:44 compute-0 ceph-mon[74928]: pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:44 compute-0 python3.9[212591]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:44 compute-0 sudo[212589]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:45 compute-0 sudo[212750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fonqbhmwpwomvjiednpakxyuajfhduxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157784.917902-1188-243642816123750/AnsiballZ_stat.py'
Nov 26 11:49:45 compute-0 sudo[212750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:45 compute-0 podman[212715]: 2025-11-26 11:49:45.165285897 +0000 UTC m=+0.058439463 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 26 11:49:45 compute-0 python3.9[212759]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:45 compute-0 sudo[212750]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:45 compute-0 sudo[212887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qolamysowdrcehmkdifelygxldwnsbnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157784.917902-1188-243642816123750/AnsiballZ_copy.py'
Nov 26 11:49:45 compute-0 sudo[212887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:45 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:45 compute-0 python3.9[212889]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764157784.917902-1188-243642816123750/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:45 compute-0 sudo[212887]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:46 compute-0 sudo[213039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmrqfulvaokfozcqtbritcxqbwdupnwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157785.9230502-1204-78988113538377/AnsiballZ_file.py'
Nov 26 11:49:46 compute-0 sudo[213039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:46 compute-0 python3.9[213041]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:46 compute-0 sudo[213039]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:49:46 compute-0 sudo[213191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieqtwcjreqdcgpsbdpddfmigthploohx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157786.3917897-1212-115040233391666/AnsiballZ_stat.py'
Nov 26 11:49:46 compute-0 sudo[213191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:46 compute-0 ceph-mon[74928]: pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:46 compute-0 python3.9[213193]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:46 compute-0 sudo[213191]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:46 compute-0 sudo[213269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhmtogmfqkgoicrsujojvdbaepisfkgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157786.3917897-1212-115040233391666/AnsiballZ_file.py'
Nov 26 11:49:46 compute-0 sudo[213269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:47 compute-0 python3.9[213271]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:47 compute-0 sudo[213269]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:47 compute-0 sudo[213421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvmnnylidfaycsikwhhyboivxvjlcjyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157787.2353148-1224-145288663252099/AnsiballZ_stat.py'
Nov 26 11:49:47 compute-0 sudo[213421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:47 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:47 compute-0 python3.9[213423]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:47 compute-0 sudo[213421]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:47 compute-0 sudo[213499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmufhugeuxiztzvwmeoduxzuemdxnlqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157787.2353148-1224-145288663252099/AnsiballZ_file.py'
Nov 26 11:49:47 compute-0 sudo[213499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:47 compute-0 python3.9[213501]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.21ui2nhw recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:47 compute-0 sudo[213499]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:48 compute-0 sudo[213651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgxrpwmsxaqgxqbabhnvozqbxxlylpws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157788.0374513-1236-66294376542605/AnsiballZ_stat.py'
Nov 26 11:49:48 compute-0 sudo[213651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:48 compute-0 python3.9[213653]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:48 compute-0 sudo[213651]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:48 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 26 11:49:48 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 26 11:49:48 compute-0 sudo[213729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zunghmmkxmbrdbsktdzedpfybtsaxaxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157788.0374513-1236-66294376542605/AnsiballZ_file.py'
Nov 26 11:49:48 compute-0 sudo[213729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:48 compute-0 ceph-mon[74928]: pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:48 compute-0 python3.9[213731]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:48 compute-0 sudo[213729]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:49 compute-0 sudo[213881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chztqbapdrztrixlxuzmnavkryzusmfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157788.8835325-1249-211552662807959/AnsiballZ_command.py'
Nov 26 11:49:49 compute-0 sudo[213881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:49 compute-0 python3.9[213883]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:49:49 compute-0 sudo[213881]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:49 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:49 compute-0 sudo[214034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqyhounvniwqfkcdnidpgrlqwgqahhms ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764157789.3694634-1257-86195035883049/AnsiballZ_edpm_nftables_from_files.py'
Nov 26 11:49:49 compute-0 sudo[214034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:49 compute-0 python3[214036]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 26 11:49:49 compute-0 sudo[214034]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:50 compute-0 sudo[214186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgljvtphtqyaryuaxjbwtxhvyzqbwvvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157789.9619718-1265-225085710920006/AnsiballZ_stat.py'
Nov 26 11:49:50 compute-0 sudo[214186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:50 compute-0 python3.9[214188]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:50 compute-0 sudo[214186]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:50 compute-0 sudo[214264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiupduodwkrwkobotkfplxtqwqacxyca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157789.9619718-1265-225085710920006/AnsiballZ_file.py'
Nov 26 11:49:50 compute-0 sudo[214264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:50 compute-0 python3.9[214266]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:50 compute-0 sudo[214264]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:50 compute-0 ceph-mon[74928]: pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 11:49:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:49:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 11:49:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:49:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:49:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:49:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:49:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:49:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:49:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:49:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:49:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:49:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 11:49:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:49:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:49:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:49:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 11:49:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:49:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 11:49:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:49:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:49:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:49:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 11:49:50 compute-0 sudo[214416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkcfiwnrsnkypefhhdilyfaatqazkjvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157790.7807767-1277-20310300287835/AnsiballZ_stat.py'
Nov 26 11:49:50 compute-0 sudo[214416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:51 compute-0 python3.9[214418]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:51 compute-0 sudo[214416]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:51 compute-0 sudo[214494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riixhvznddfrwclwmuescdurwocjihmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157790.7807767-1277-20310300287835/AnsiballZ_file.py'
Nov 26 11:49:51 compute-0 sudo[214494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:51 compute-0 python3.9[214496]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:51 compute-0 sudo[214494]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:51 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:49:51 compute-0 sudo[214646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qirymvhabjcvyqcorlovsqdkwcjueotv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157791.5969093-1289-210717993438190/AnsiballZ_stat.py'
Nov 26 11:49:51 compute-0 sudo[214646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:51 compute-0 python3.9[214648]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:51 compute-0 sudo[214646]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:52 compute-0 sudo[214724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eechinlpeardkgjuxujzvnhpcmycztid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157791.5969093-1289-210717993438190/AnsiballZ_file.py'
Nov 26 11:49:52 compute-0 sudo[214724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:52 compute-0 python3.9[214726]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:52 compute-0 sudo[214724]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:52 compute-0 sudo[214876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgtyugyzitzmzzsbnozqzmhqowttfytx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157792.4052782-1301-268371825703315/AnsiballZ_stat.py'
Nov 26 11:49:52 compute-0 sudo[214876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:52 compute-0 ceph-mon[74928]: pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:52 compute-0 python3.9[214878]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:52 compute-0 sudo[214876]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:52 compute-0 sudo[214954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwcgpxcigowzvugrlcngvxlddtrrklts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157792.4052782-1301-268371825703315/AnsiballZ_file.py'
Nov 26 11:49:52 compute-0 sudo[214954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:53 compute-0 python3.9[214956]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:53 compute-0 sudo[214954]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:53 compute-0 sudo[215106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsrqhvgjpztjjafdereerjzdqvfzuxpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157793.1951272-1313-78532480399891/AnsiballZ_stat.py'
Nov 26 11:49:53 compute-0 sudo[215106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:53 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:53 compute-0 python3.9[215108]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:53 compute-0 sudo[215106]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:53 compute-0 sudo[215231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysbmheawtgnelhluebjtordswefliqxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157793.1951272-1313-78532480399891/AnsiballZ_copy.py'
Nov 26 11:49:53 compute-0 sudo[215231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:53 compute-0 python3.9[215233]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764157793.1951272-1313-78532480399891/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:54 compute-0 sudo[215231]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:54 compute-0 sudo[215383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yktcwdjpmtlvikzuwlvkerrbmhikxhsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157794.1371-1328-105435423777330/AnsiballZ_file.py'
Nov 26 11:49:54 compute-0 sudo[215383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:54 compute-0 python3.9[215385]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:54 compute-0 sudo[215383]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:54 compute-0 ceph-mon[74928]: pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:54 compute-0 sudo[215535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wydrvkbymwzhrwrmzorucyodffqxklzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157794.5951607-1336-64806715784211/AnsiballZ_command.py'
Nov 26 11:49:54 compute-0 sudo[215535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:54 compute-0 python3.9[215537]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:49:54 compute-0 sudo[215535]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:55 compute-0 sudo[215690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byqixbecisubayrczidpdyvxahroyvth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157795.0867429-1344-260617124454959/AnsiballZ_blockinfile.py'
Nov 26 11:49:55 compute-0 sudo[215690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:55 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:55 compute-0 python3.9[215692]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
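Reconstructed from the parameters logged above (marker text, BEGIN/END markers, and the four include lines), the managed block this task maintains in /etc/sysconfig/nftables.conf should end up looking like the following sketch; the surrounding file content is not captured here:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK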
Nov 26 11:49:55 compute-0 sudo[215690]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:55 compute-0 sudo[215842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjyhfwwzcemxqrntakqovdtmcgsnohzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157795.7540314-1353-32494417774505/AnsiballZ_command.py'
Nov 26 11:49:55 compute-0 sudo[215842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:56 compute-0 python3.9[215844]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:49:56 compute-0 sudo[215842]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:56 compute-0 sudo[215995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtnbwzdzvmyhlntcdndwlsofrdhxetlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157796.2383158-1361-92008010825760/AnsiballZ_stat.py'
Nov 26 11:49:56 compute-0 sudo[215995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:56 compute-0 python3.9[215997]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:49:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:49:56 compute-0 sudo[215995]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:56 compute-0 ceph-mon[74928]: pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:56 compute-0 sudo[216149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxkpqkgujgkoblmwqappryjinagubpyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157796.7125607-1369-97247271494867/AnsiballZ_command.py'
Nov 26 11:49:56 compute-0 sudo[216149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:57 compute-0 python3.9[216151]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:49:57 compute-0 sudo[216149]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:57 compute-0 sudo[216156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:49:57 compute-0 sudo[216156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:49:57 compute-0 sudo[216156]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:57 compute-0 sudo[216204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:49:57 compute-0 sudo[216204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:49:57 compute-0 sudo[216204]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:57 compute-0 sudo[216230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:49:57 compute-0 sudo[216230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:49:57 compute-0 sudo[216230]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:57 compute-0 sudo[216278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 11:49:57 compute-0 sudo[216278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:49:57 compute-0 sudo[216414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onruuhkmyubfpfmxkwvfivyfvbarggmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157797.2054865-1377-97621551883222/AnsiballZ_file.py'
Nov 26 11:49:57 compute-0 sudo[216414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:57 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:57 compute-0 python3.9[216418]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:57 compute-0 sudo[216414]: pam_unix(sudo:session): session closed for user root
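[annotation] The three zuul Ansible tasks above (ansible.builtin.stat on /etc/nftables/edpm-rules.nft.changed, the ansible.legacy.command piping the three edpm-*.nft files into nft -f -, and ansible.builtin.file state=absent on the same marker) look like a marker-file pattern: reload the ruleset only when the .changed flag is present, then clear the flag. The log only records that all three steps ran; the conditional is an inference. A minimal Python sketch of that sequence, using the paths from the logged commands (error handling is illustrative, not taken from the playbook):

    import os
    import subprocess

    MARKER = "/etc/nftables/edpm-rules.nft.changed"
    RULE_FILES = [
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
    ]

    def reload_edpm_ruleset() -> None:
        # Mirrors the logged pipeline: set -o pipefail; cat <files> | nft -f -
        payload = bytearray()
        for path in RULE_FILES:
            with open(path, "rb") as fh:
                payload += fh.read()
        subprocess.run(["nft", "-f", "-"], input=bytes(payload), check=True)

    if os.path.exists(MARKER):    # ansible.builtin.stat step
        reload_edpm_ruleset()     # ansible.legacy.command step
        os.remove(MARKER)         # ansible.builtin.file state=absent step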
Nov 26 11:49:57 compute-0 sudo[216278]: pam_unix(sudo:session): session closed for user root
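[annotation] The ceph-admin sudo sequence above (/bin/true, /bin/which python3, then the copied cephadm binary with --timeout 895 gather-facts) is the cephadm orchestrator probing this host. A rough sketch of the final step, assuming gather-facts emits a JSON facts document on stdout (the binary path is the one from the logged COMMAND; the key names of the facts are not shown in the log):

    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    out = subprocess.run(
        ["sudo", "python3", CEPHADM, "--timeout", "895", "gather-facts"],
        capture_output=True, text=True, check=True,
    )
    facts = json.loads(out.stdout)
    print(list(facts)[:5])   # peek at a few fact keys; exact keys are an assumption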
Nov 26 11:49:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 26 11:49:57 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 11:49:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:49:57 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:49:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:49:57 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:49:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:49:57 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:49:57 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev db3ac142-36ec-4294-9f64-e987094603af does not exist
Nov 26 11:49:57 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 4e92ea2c-43f5-48db-a321-46184dd30bf5 does not exist
Nov 26 11:49:57 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 4164b3e2-9943-4d78-b4ef-2297483cd77d does not exist
Nov 26 11:49:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 11:49:57 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:49:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 11:49:57 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:49:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:49:57 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:49:57 compute-0 sudo[216460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:49:57 compute-0 sudo[216460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:49:57 compute-0 sudo[216460]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:57 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 11:49:57 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:49:57 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:49:57 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:49:57 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:49:57 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:49:57 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
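[annotation] The mon_command batch dispatched by mgr.compute-0.mwrktr above (config rm of osd_memory_target for host compute-0, config generate-minimal-conf, auth get for client.admin and client.bootstrap-osd, config-key set of the osd_remove_queue, and an osd tree query filtered to destroyed OSDs) is issued internally by the cephadm mgr module over its mon client. As a hedged reference, roughly equivalent ceph CLI invocations would be:

    import subprocess

    def ceph(*args: str) -> str:
        # Sketch only: the mgr does not shell out to the CLI; this just names
        # the equivalent administrative commands.
        return subprocess.run(["ceph", *args], capture_output=True,
                              text=True, check=True).stdout

    ceph("config", "rm", "osd/host:compute-0", "osd_memory_target")
    minimal_conf = ceph("config", "generate-minimal-conf")
    admin_key = ceph("auth", "get", "client.admin")
    destroyed = ceph("osd", "tree", "destroyed", "--format", "json")
    bootstrap_key = ceph("auth", "get", "client.bootstrap-osd")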
Nov 26 11:49:57 compute-0 sudo[216485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:49:57 compute-0 sudo[216485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:49:57 compute-0 sudo[216485]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:57 compute-0 sudo[216534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:49:57 compute-0 sudo[216534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:49:57 compute-0 sudo[216534]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:57 compute-0 sudo[216587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 11:49:57 compute-0 sudo[216587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:49:57 compute-0 sudo[216685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xikvbixkidcttuniigmvapexyzhfpvpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157797.7047586-1385-183057684547017/AnsiballZ_stat.py'
Nov 26 11:49:57 compute-0 sudo[216685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:58 compute-0 podman[216719]: 2025-11-26 11:49:58.031261949 +0000 UTC m=+0.028517766 container create a57fad813bf4513b49dfd0daa4447512bffbfeeea4b5a1c30ce0ee0c24cf6afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_perlman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 11:49:58 compute-0 python3.9[216689]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:58 compute-0 systemd[1]: Started libpod-conmon-a57fad813bf4513b49dfd0daa4447512bffbfeeea4b5a1c30ce0ee0c24cf6afd.scope.
Nov 26 11:49:58 compute-0 sudo[216685]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:58 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:49:58 compute-0 podman[216719]: 2025-11-26 11:49:58.083892692 +0000 UTC m=+0.081148519 container init a57fad813bf4513b49dfd0daa4447512bffbfeeea4b5a1c30ce0ee0c24cf6afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_perlman, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 26 11:49:58 compute-0 podman[216719]: 2025-11-26 11:49:58.090162287 +0000 UTC m=+0.087418104 container start a57fad813bf4513b49dfd0daa4447512bffbfeeea4b5a1c30ce0ee0c24cf6afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_perlman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 11:49:58 compute-0 podman[216719]: 2025-11-26 11:49:58.091253695 +0000 UTC m=+0.088509513 container attach a57fad813bf4513b49dfd0daa4447512bffbfeeea4b5a1c30ce0ee0c24cf6afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 11:49:58 compute-0 distracted_perlman[216733]: 167 167
Nov 26 11:49:58 compute-0 systemd[1]: libpod-a57fad813bf4513b49dfd0daa4447512bffbfeeea4b5a1c30ce0ee0c24cf6afd.scope: Deactivated successfully.
Nov 26 11:49:58 compute-0 podman[216719]: 2025-11-26 11:49:58.094051944 +0000 UTC m=+0.091307781 container died a57fad813bf4513b49dfd0daa4447512bffbfeeea4b5a1c30ce0ee0c24cf6afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_perlman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:49:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e921593738b110b79e5db793da88a49d90a010bbdedee4f037ec439ad1a59f7-merged.mount: Deactivated successfully.
Nov 26 11:49:58 compute-0 podman[216719]: 2025-11-26 11:49:58.11402346 +0000 UTC m=+0.111279277 container remove a57fad813bf4513b49dfd0daa4447512bffbfeeea4b5a1c30ce0ee0c24cf6afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_perlman, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:49:58 compute-0 podman[216719]: 2025-11-26 11:49:58.019467004 +0000 UTC m=+0.016722841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:49:58 compute-0 systemd[1]: libpod-conmon-a57fad813bf4513b49dfd0daa4447512bffbfeeea4b5a1c30ce0ee0c24cf6afd.scope: Deactivated successfully.
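[annotation] The podman create/init/start/attach/died/remove burst above is a short-lived helper container: cephadm runs a one-off command in the Ceph image (here the container prints "167 167", the ceph uid/gid) and the container is removed as soon as it exits. The exact podman flags and in-container command are not recorded in the log, so the following is only a sketch of the pattern, with a hypothetical probe command:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    def run_once(cmd: list[str]) -> str:
        # Disposable container: run the command, capture stdout, remove on exit.
        result = subprocess.run(["podman", "run", "--rm", IMAGE, *cmd],
                                capture_output=True, text=True, check=True)
        return result.stdout.strip()

    # Hypothetical example; the real command behind "167 167" is not logged.
    print(run_once(["stat", "-c", "%u %g", "/var/lib/ceph"]))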
Nov 26 11:49:58 compute-0 podman[216814]: 2025-11-26 11:49:58.238412069 +0000 UTC m=+0.029546787 container create a543aca4764d0777227b747a423eb4cf12f69c4f1ec09c5d3245b4d3b47e433f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_jackson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:49:58 compute-0 systemd[1]: Started libpod-conmon-a543aca4764d0777227b747a423eb4cf12f69c4f1ec09c5d3245b4d3b47e433f.scope.
Nov 26 11:49:58 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82a234c39213934abddd16da94c6d30a746b48e611cdb624b45a48f9ce5419e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82a234c39213934abddd16da94c6d30a746b48e611cdb624b45a48f9ce5419e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82a234c39213934abddd16da94c6d30a746b48e611cdb624b45a48f9ce5419e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82a234c39213934abddd16da94c6d30a746b48e611cdb624b45a48f9ce5419e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82a234c39213934abddd16da94c6d30a746b48e611cdb624b45a48f9ce5419e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:49:58 compute-0 podman[216814]: 2025-11-26 11:49:58.298182138 +0000 UTC m=+0.089316874 container init a543aca4764d0777227b747a423eb4cf12f69c4f1ec09c5d3245b4d3b47e433f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_jackson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:49:58 compute-0 podman[216814]: 2025-11-26 11:49:58.303735521 +0000 UTC m=+0.094870238 container start a543aca4764d0777227b747a423eb4cf12f69c4f1ec09c5d3245b4d3b47e433f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_jackson, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:49:58 compute-0 podman[216814]: 2025-11-26 11:49:58.305029091 +0000 UTC m=+0.096163808 container attach a543aca4764d0777227b747a423eb4cf12f69c4f1ec09c5d3245b4d3b47e433f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_jackson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:49:58 compute-0 podman[216814]: 2025-11-26 11:49:58.226464808 +0000 UTC m=+0.017599544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:49:58 compute-0 sudo[216893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myeposxejdjnglepelspkzkmwhhimkdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157797.7047586-1385-183057684547017/AnsiballZ_copy.py'
Nov 26 11:49:58 compute-0 sudo[216893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:58 compute-0 python3.9[216895]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764157797.7047586-1385-183057684547017/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:58 compute-0 sudo[216893]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:58 compute-0 ceph-mon[74928]: pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:58 compute-0 sudo[217045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqepnrxmmomhgqegdmgzzuiiqhqnqbkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157798.6406963-1400-149532073622683/AnsiballZ_stat.py'
Nov 26 11:49:58 compute-0 sudo[217045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:58 compute-0 python3.9[217047]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:49:58 compute-0 sudo[217045]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:59 compute-0 nifty_jackson[216858]: --> passed data devices: 0 physical, 3 LVM
Nov 26 11:49:59 compute-0 nifty_jackson[216858]: --> relative data size: 1.0
Nov 26 11:49:59 compute-0 nifty_jackson[216858]: --> All data devices are unavailable
Nov 26 11:49:59 compute-0 systemd[1]: libpod-a543aca4764d0777227b747a423eb4cf12f69c4f1ec09c5d3245b4d3b47e433f.scope: Deactivated successfully.
Nov 26 11:49:59 compute-0 podman[216814]: 2025-11-26 11:49:59.139306387 +0000 UTC m=+0.930441114 container died a543aca4764d0777227b747a423eb4cf12f69c4f1ec09c5d3245b4d3b47e433f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:49:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-d82a234c39213934abddd16da94c6d30a746b48e611cdb624b45a48f9ce5419e-merged.mount: Deactivated successfully.
Nov 26 11:49:59 compute-0 podman[216814]: 2025-11-26 11:49:59.172341813 +0000 UTC m=+0.963476521 container remove a543aca4764d0777227b747a423eb4cf12f69c4f1ec09c5d3245b4d3b47e433f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:49:59 compute-0 systemd[1]: libpod-conmon-a543aca4764d0777227b747a423eb4cf12f69c4f1ec09c5d3245b4d3b47e433f.scope: Deactivated successfully.
Nov 26 11:49:59 compute-0 sudo[216587]: pam_unix(sudo:session): session closed for user root
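[annotation] The nifty_jackson container above executed the ceph-volume lvm batch call from the earlier ceph-admin COMMAND line (three pre-created LVs, --no-auto, --yes, --no-systemd). Its output, "passed data devices: 0 physical, 3 LVM" followed by "All data devices are unavailable", means no new OSDs were created, which is consistent with the lvm list output further below showing OSDs 0-2 already provisioned on those LVs. ceph-volume's batch subcommand also accepts a --report mode to explain what it would do without changing anything; a sketch of such a dry run via the same cephadm wrapper (assumption: the wrapper accepts the call without the --config-json stdin that the logged run supplied):

    import subprocess

    CEPHADM = ("/var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    FSID = "ebab460c-3fd7-5f66-aa87-e10c143123f7"
    LVS = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]

    subprocess.run(
        ["sudo", "python3", CEPHADM, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--",
         "lvm", "batch", "--no-auto", *LVS, "--report"],
        check=True,
    )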
Nov 26 11:49:59 compute-0 sudo[217105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:49:59 compute-0 sudo[217105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:49:59 compute-0 sudo[217105]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:59 compute-0 sudo[217156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:49:59 compute-0 sudo[217156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:49:59 compute-0 sudo[217156]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:59 compute-0 sudo[217202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:49:59 compute-0 sudo[217202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:49:59 compute-0 sudo[217202]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:59 compute-0 sudo[217251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- lvm list --format json
Nov 26 11:49:59 compute-0 sudo[217251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:49:59 compute-0 sudo[217301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egqdwbtmskcozbxghoipgxvytadiagkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157798.6406963-1400-149532073622683/AnsiballZ_copy.py'
Nov 26 11:49:59 compute-0 sudo[217301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:49:59 compute-0 python3.9[217304]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764157798.6406963-1400-149532073622683/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:49:59 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:49:59 compute-0 sudo[217301]: pam_unix(sudo:session): session closed for user root
Nov 26 11:49:59 compute-0 podman[217337]: 2025-11-26 11:49:59.604472442 +0000 UTC m=+0.029069987 container create d867281718397cac16abfe7addf15c62a580351408f7047f56beb5f7cd3abce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:49:59 compute-0 systemd[1]: Started libpod-conmon-d867281718397cac16abfe7addf15c62a580351408f7047f56beb5f7cd3abce0.scope.
Nov 26 11:49:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:49:59 compute-0 podman[217337]: 2025-11-26 11:49:59.655952795 +0000 UTC m=+0.080550361 container init d867281718397cac16abfe7addf15c62a580351408f7047f56beb5f7cd3abce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:49:59 compute-0 podman[217337]: 2025-11-26 11:49:59.662939102 +0000 UTC m=+0.087536646 container start d867281718397cac16abfe7addf15c62a580351408f7047f56beb5f7cd3abce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_poitras, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 11:49:59 compute-0 podman[217337]: 2025-11-26 11:49:59.663977561 +0000 UTC m=+0.088575106 container attach d867281718397cac16abfe7addf15c62a580351408f7047f56beb5f7cd3abce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 11:49:59 compute-0 eloquent_poitras[217373]: 167 167
Nov 26 11:49:59 compute-0 podman[217337]: 2025-11-26 11:49:59.666219919 +0000 UTC m=+0.090817464 container died d867281718397cac16abfe7addf15c62a580351408f7047f56beb5f7cd3abce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_poitras, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:49:59 compute-0 systemd[1]: libpod-d867281718397cac16abfe7addf15c62a580351408f7047f56beb5f7cd3abce0.scope: Deactivated successfully.
Nov 26 11:49:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-550325a5a029d27458e61188cbcab9585c361ecbe2ef8f7d150c8045f17f1979-merged.mount: Deactivated successfully.
Nov 26 11:49:59 compute-0 podman[217337]: 2025-11-26 11:49:59.690226697 +0000 UTC m=+0.114824242 container remove d867281718397cac16abfe7addf15c62a580351408f7047f56beb5f7cd3abce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_poitras, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Nov 26 11:49:59 compute-0 podman[217337]: 2025-11-26 11:49:59.592576045 +0000 UTC m=+0.017173610 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:49:59 compute-0 systemd[1]: libpod-conmon-d867281718397cac16abfe7addf15c62a580351408f7047f56beb5f7cd3abce0.scope: Deactivated successfully.
Nov 26 11:49:59 compute-0 podman[217469]: 2025-11-26 11:49:59.814434055 +0000 UTC m=+0.029591391 container create e7aed02fb1b65119f6605baff2b6f3b2ab9b1574082861d03ec423222fedeaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:49:59 compute-0 systemd[1]: Started libpod-conmon-e7aed02fb1b65119f6605baff2b6f3b2ab9b1574082861d03ec423222fedeaaa.scope.
Nov 26 11:49:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:49:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5467c5316c2606aeea5c53bd503e444f8561222bda4afaf6dae8e82fe4fd6aa3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:49:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5467c5316c2606aeea5c53bd503e444f8561222bda4afaf6dae8e82fe4fd6aa3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:49:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5467c5316c2606aeea5c53bd503e444f8561222bda4afaf6dae8e82fe4fd6aa3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:49:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5467c5316c2606aeea5c53bd503e444f8561222bda4afaf6dae8e82fe4fd6aa3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:49:59 compute-0 podman[217469]: 2025-11-26 11:49:59.868702517 +0000 UTC m=+0.083859863 container init e7aed02fb1b65119f6605baff2b6f3b2ab9b1574082861d03ec423222fedeaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_morse, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:49:59 compute-0 podman[217469]: 2025-11-26 11:49:59.875957399 +0000 UTC m=+0.091114735 container start e7aed02fb1b65119f6605baff2b6f3b2ab9b1574082861d03ec423222fedeaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_morse, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:49:59 compute-0 podman[217469]: 2025-11-26 11:49:59.877928326 +0000 UTC m=+0.093085683 container attach e7aed02fb1b65119f6605baff2b6f3b2ab9b1574082861d03ec423222fedeaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 11:49:59 compute-0 podman[217469]: 2025-11-26 11:49:59.801200215 +0000 UTC m=+0.016357572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:49:59 compute-0 sudo[217540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emwlzuytvxksmlwskxfargjapredgoqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157799.6899657-1415-42900101549911/AnsiballZ_stat.py'
Nov 26 11:49:59 compute-0 sudo[217540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:00 compute-0 python3.9[217542]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:50:00 compute-0 sudo[217540]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:00 compute-0 sudo[217663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahqpcgogqvfkgxqqknqeivzioaluymns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157799.6899657-1415-42900101549911/AnsiballZ_copy.py'
Nov 26 11:50:00 compute-0 sudo[217663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:00 compute-0 python3.9[217665]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764157799.6899657-1415-42900101549911/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:50:00 compute-0 sudo[217663]: pam_unix(sudo:session): session closed for user root
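[annotation] The stat/copy task pairs above install three libvirt-related unit files with mode=0644: /etc/systemd/system/edpm_libvirt.target, /etc/systemd/system/edpm_libvirt_guests.service, and /etc/systemd/system/virt-guest-shutdown.target. An AnsiballZ_systemd.py invocation follows later in the log, but its arguments are not recorded here. What each copy task effectively does on the host, as a minimal sketch (destination paths and mode come from the logged module arguments; the staged source path is a hypothetical stand-in for the ansible-tmp .source.* file, and the module's checksum handling is omitted):

    import os
    import shutil

    def install_unit(src: str, dest: str, mode: int = 0o644) -> None:
        # Roughly what ansible.legacy.copy does here: place the rendered unit
        # file at its destination and enforce the requested mode.
        shutil.copy(src, dest)
        os.chmod(dest, mode)

    # Hypothetical staged source path for illustration only.
    install_unit("/tmp/edpm_libvirt.target.staged",
                 "/etc/systemd/system/edpm_libvirt.target")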
Nov 26 11:50:00 compute-0 vigilant_morse[217509]: {
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:     "0": [
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:         {
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "devices": [
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "/dev/loop3"
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             ],
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "lv_name": "ceph_lv0",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "lv_size": "21470642176",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a9ad59a0-aa2e-4d92-b571-519d2d145b6a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "lv_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "name": "ceph_lv0",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "tags": {
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.block_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.cluster_name": "ceph",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.crush_device_class": "",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.encrypted": "0",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.osd_fsid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.osd_id": "0",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.type": "block",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.vdo": "0"
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             },
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "type": "block",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "vg_name": "ceph_vg0"
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:         }
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:     ],
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:     "1": [
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:         {
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "devices": [
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "/dev/loop4"
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             ],
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "lv_name": "ceph_lv1",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "lv_size": "21470642176",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2627095b-eef8-4027-bfef-68bf7cb6801f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "lv_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "name": "ceph_lv1",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "tags": {
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.block_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.cluster_name": "ceph",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.crush_device_class": "",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.encrypted": "0",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.osd_fsid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.osd_id": "1",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.type": "block",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.vdo": "0"
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             },
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "type": "block",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "vg_name": "ceph_vg1"
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:         }
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:     ],
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:     "2": [
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:         {
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "devices": [
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "/dev/loop5"
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             ],
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "lv_name": "ceph_lv2",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "lv_size": "21470642176",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d56156fb-7361-4bef-b06b-1320109b4323,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "lv_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "name": "ceph_lv2",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "tags": {
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.block_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.cluster_name": "ceph",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.crush_device_class": "",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.encrypted": "0",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.osd_fsid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.osd_id": "2",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.type": "block",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:                 "ceph.vdo": "0"
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             },
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "type": "block",
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:             "vg_name": "ceph_vg2"
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:         }
Nov 26 11:50:00 compute-0 vigilant_morse[217509]:     ]
Nov 26 11:50:00 compute-0 vigilant_morse[217509]: }
Nov 26 11:50:00 compute-0 systemd[1]: libpod-e7aed02fb1b65119f6605baff2b6f3b2ab9b1574082861d03ec423222fedeaaa.scope: Deactivated successfully.
Nov 26 11:50:00 compute-0 podman[217469]: 2025-11-26 11:50:00.517021011 +0000 UTC m=+0.732178347 container died e7aed02fb1b65119f6605baff2b6f3b2ab9b1574082861d03ec423222fedeaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_morse, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:50:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-5467c5316c2606aeea5c53bd503e444f8561222bda4afaf6dae8e82fe4fd6aa3-merged.mount: Deactivated successfully.
Nov 26 11:50:00 compute-0 podman[217469]: 2025-11-26 11:50:00.545788797 +0000 UTC m=+0.760946134 container remove e7aed02fb1b65119f6605baff2b6f3b2ab9b1574082861d03ec423222fedeaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_morse, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 11:50:00 compute-0 systemd[1]: libpod-conmon-e7aed02fb1b65119f6605baff2b6f3b2ab9b1574082861d03ec423222fedeaaa.scope: Deactivated successfully.
Nov 26 11:50:00 compute-0 sudo[217251]: pam_unix(sudo:session): session closed for user root
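[annotation] The vigilant_morse container above printed the ceph-volume lvm list --format json inventory: OSDs 0, 1, and 2, one per ceph_vgN/ceph_lvN logical volume backed by /dev/loop3, /dev/loop4, and /dev/loop5. A small sketch of consuming that document, for example to map OSD ids to their block LVs and OSD fsids (the variable holding the captured stdout is assumed; the key layout matches the JSON shown above):

    import json

    def osd_block_devices(lvm_list_stdout: str) -> dict[int, dict[str, str]]:
        # lvm_list_stdout: the JSON emitted by `ceph-volume lvm list --format json`.
        listing = json.loads(lvm_list_stdout)
        result: dict[int, dict[str, str]] = {}
        for osd_id, volumes in listing.items():
            for vol in volumes:
                if vol.get("type") == "block":
                    result[int(osd_id)] = {
                        "lv_path": vol["lv_path"],
                        "osd_fsid": vol["tags"]["ceph.osd_fsid"],
                    }
        return result

    # For this host the result maps 0 -> /dev/ceph_vg0/ceph_lv0, 1 -> /dev/ceph_vg1/ceph_lv1,
    # 2 -> /dev/ceph_vg2/ceph_lv2, each with its ceph.osd_fsid tag.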
Nov 26 11:50:00 compute-0 sudo[217705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:50:00 compute-0 sudo[217705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:50:00 compute-0 sudo[217705]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:00 compute-0 sudo[217756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:50:00 compute-0 sudo[217756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:50:00 compute-0 sudo[217756]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:00 compute-0 ceph-mon[74928]: pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:00 compute-0 sudo[217806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:50:00 compute-0 sudo[217806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:50:00 compute-0 sudo[217806]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:00 compute-0 sudo[217854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- raw list --format json
Nov 26 11:50:00 compute-0 sudo[217854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:50:00 compute-0 sudo[217929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amediaxunatuaqismkozihqbfdwwrlad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157800.604018-1430-274837772559661/AnsiballZ_systemd.py'
Nov 26 11:50:00 compute-0 sudo[217929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:00 compute-0 podman[217964]: 2025-11-26 11:50:00.985819526 +0000 UTC m=+0.028336575 container create 4d31e8de560ba11c12f1914f5186f586fb0af9554f9efda93ae180cb03674229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Nov 26 11:50:01 compute-0 systemd[1]: Started libpod-conmon-4d31e8de560ba11c12f1914f5186f586fb0af9554f9efda93ae180cb03674229.scope.
Nov 26 11:50:01 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:50:01 compute-0 podman[217964]: 2025-11-26 11:50:01.036698255 +0000 UTC m=+0.079215324 container init 4d31e8de560ba11c12f1914f5186f586fb0af9554f9efda93ae180cb03674229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wilbur, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:50:01 compute-0 podman[217964]: 2025-11-26 11:50:01.041654742 +0000 UTC m=+0.084171791 container start 4d31e8de560ba11c12f1914f5186f586fb0af9554f9efda93ae180cb03674229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wilbur, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 11:50:01 compute-0 podman[217964]: 2025-11-26 11:50:01.04270883 +0000 UTC m=+0.085225879 container attach 4d31e8de560ba11c12f1914f5186f586fb0af9554f9efda93ae180cb03674229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wilbur, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 11:50:01 compute-0 elegant_wilbur[217977]: 167 167
Nov 26 11:50:01 compute-0 systemd[1]: libpod-4d31e8de560ba11c12f1914f5186f586fb0af9554f9efda93ae180cb03674229.scope: Deactivated successfully.
Nov 26 11:50:01 compute-0 conmon[217977]: conmon 4d31e8de560ba11c12f1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4d31e8de560ba11c12f1914f5186f586fb0af9554f9efda93ae180cb03674229.scope/container/memory.events
Nov 26 11:50:01 compute-0 podman[217964]: 2025-11-26 11:50:01.046173364 +0000 UTC m=+0.088690414 container died 4d31e8de560ba11c12f1914f5186f586fb0af9554f9efda93ae180cb03674229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wilbur, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:50:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-10c3f93e066e4a7bf1c4aa7b9167e75ec3c847a7908125ef65cafcf66b1b2a51-merged.mount: Deactivated successfully.
Nov 26 11:50:01 compute-0 podman[217964]: 2025-11-26 11:50:01.064828688 +0000 UTC m=+0.107345747 container remove 4d31e8de560ba11c12f1914f5186f586fb0af9554f9efda93ae180cb03674229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wilbur, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 11:50:01 compute-0 podman[217964]: 2025-11-26 11:50:00.973827659 +0000 UTC m=+0.016344729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:50:01 compute-0 systemd[1]: libpod-conmon-4d31e8de560ba11c12f1914f5186f586fb0af9554f9efda93ae180cb03674229.scope: Deactivated successfully.
Nov 26 11:50:01 compute-0 python3.9[217931]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:50:01 compute-0 systemd[1]: Reloading.
Nov 26 11:50:01 compute-0 systemd-sysv-generator[218020]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:50:01 compute-0 systemd-rc-local-generator[218017]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:50:01 compute-0 podman[218026]: 2025-11-26 11:50:01.209178636 +0000 UTC m=+0.029206236 container create 158d026f1df4a4c2a05728d0893d11d2b5be73b7a362f7cf53a1428c4b017489 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 11:50:01 compute-0 podman[218026]: 2025-11-26 11:50:01.197073286 +0000 UTC m=+0.017100905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:50:01 compute-0 systemd[1]: Started libpod-conmon-158d026f1df4a4c2a05728d0893d11d2b5be73b7a362f7cf53a1428c4b017489.scope.
Nov 26 11:50:01 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:50:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e20d188c9ae6379d07d807e6b4d4b097e52233b2001154d7582aecbe855f528/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:50:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e20d188c9ae6379d07d807e6b4d4b097e52233b2001154d7582aecbe855f528/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:50:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e20d188c9ae6379d07d807e6b4d4b097e52233b2001154d7582aecbe855f528/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:50:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e20d188c9ae6379d07d807e6b4d4b097e52233b2001154d7582aecbe855f528/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:50:01 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Nov 26 11:50:01 compute-0 podman[218026]: 2025-11-26 11:50:01.415080101 +0000 UTC m=+0.235107710 container init 158d026f1df4a4c2a05728d0893d11d2b5be73b7a362f7cf53a1428c4b017489 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lumiere, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:50:01 compute-0 podman[218026]: 2025-11-26 11:50:01.421411382 +0000 UTC m=+0.241438981 container start 158d026f1df4a4c2a05728d0893d11d2b5be73b7a362f7cf53a1428c4b017489 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:50:01 compute-0 podman[218026]: 2025-11-26 11:50:01.422611607 +0000 UTC m=+0.242639226 container attach 158d026f1df4a4c2a05728d0893d11d2b5be73b7a362f7cf53a1428c4b017489 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:50:01 compute-0 sudo[217929]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:01 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:50:01 compute-0 sudo[218206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdgmzwemwlwqaxftzvryjwzbjtulkitt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157801.5709324-1438-280090895395581/AnsiballZ_systemd.py'
Nov 26 11:50:01 compute-0 sudo[218206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:02 compute-0 python3.9[218208]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 26 11:50:02 compute-0 systemd[1]: Reloading.
Nov 26 11:50:02 compute-0 systemd-sysv-generator[218251]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:50:02 compute-0 systemd-rc-local-generator[218248]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:50:02 compute-0 gifted_lumiere[218050]: {
Nov 26 11:50:02 compute-0 gifted_lumiere[218050]:     "2627095b-eef8-4027-bfef-68bf7cb6801f": {
Nov 26 11:50:02 compute-0 gifted_lumiere[218050]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:50:02 compute-0 gifted_lumiere[218050]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 11:50:02 compute-0 gifted_lumiere[218050]:         "osd_id": 1,
Nov 26 11:50:02 compute-0 gifted_lumiere[218050]:         "osd_uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:50:02 compute-0 gifted_lumiere[218050]:         "type": "bluestore"
Nov 26 11:50:02 compute-0 gifted_lumiere[218050]:     },
Nov 26 11:50:02 compute-0 gifted_lumiere[218050]:     "a9ad59a0-aa2e-4d92-b571-519d2d145b6a": {
Nov 26 11:50:02 compute-0 gifted_lumiere[218050]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:50:02 compute-0 gifted_lumiere[218050]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 11:50:02 compute-0 gifted_lumiere[218050]:         "osd_id": 0,
Nov 26 11:50:02 compute-0 gifted_lumiere[218050]:         "osd_uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:50:02 compute-0 gifted_lumiere[218050]:         "type": "bluestore"
Nov 26 11:50:02 compute-0 gifted_lumiere[218050]:     },
Nov 26 11:50:02 compute-0 gifted_lumiere[218050]:     "d56156fb-7361-4bef-b06b-1320109b4323": {
Nov 26 11:50:02 compute-0 gifted_lumiere[218050]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:50:02 compute-0 gifted_lumiere[218050]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 11:50:02 compute-0 gifted_lumiere[218050]:         "osd_id": 2,
Nov 26 11:50:02 compute-0 gifted_lumiere[218050]:         "osd_uuid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:50:02 compute-0 gifted_lumiere[218050]:         "type": "bluestore"
Nov 26 11:50:02 compute-0 gifted_lumiere[218050]:     }
Nov 26 11:50:02 compute-0 gifted_lumiere[218050]: }
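The JSON printed by the gifted_lumiere container is an inventory of this host's three BlueStore OSDs, keyed by OSD UUID, each entry carrying the cluster ceph_fsid, the backing LVM device, and the osd_id. A minimal sketch of turning such a listing into an osd_id-to-device map, assuming the output has been captured to a file (the path and function name below are illustrative, not taken from the log):

    import json

    def load_osd_inventory(path):
        """Parse an OSD listing keyed by OSD UUID into osd_id -> device."""
        with open(path) as fh:
            inventory = json.load(fh)
        # Each value carries ceph_fsid, device, osd_id and type ("bluestore" here).
        return {entry["osd_id"]: entry["device"] for entry in inventory.values()}

    if __name__ == "__main__":
        # Hypothetical capture of the container output shown above.
        for osd_id, device in sorted(load_osd_inventory("osd_inventory.json").items()):
            print(f"osd.{osd_id} -> {device}")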
Nov 26 11:50:02 compute-0 podman[218272]: 2025-11-26 11:50:02.227277746 +0000 UTC m=+0.020499334 container died 158d026f1df4a4c2a05728d0893d11d2b5be73b7a362f7cf53a1428c4b017489 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lumiere, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 11:50:02 compute-0 systemd[1]: libpod-158d026f1df4a4c2a05728d0893d11d2b5be73b7a362f7cf53a1428c4b017489.scope: Deactivated successfully.
Nov 26 11:50:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e20d188c9ae6379d07d807e6b4d4b097e52233b2001154d7582aecbe855f528-merged.mount: Deactivated successfully.
Nov 26 11:50:02 compute-0 podman[218272]: 2025-11-26 11:50:02.293421063 +0000 UTC m=+0.086642631 container remove 158d026f1df4a4c2a05728d0893d11d2b5be73b7a362f7cf53a1428c4b017489 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 11:50:02 compute-0 systemd[1]: libpod-conmon-158d026f1df4a4c2a05728d0893d11d2b5be73b7a362f7cf53a1428c4b017489.scope: Deactivated successfully.
Nov 26 11:50:02 compute-0 systemd[1]: Reloading.
Nov 26 11:50:02 compute-0 sudo[217854]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:50:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:50:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:50:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:50:02 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 79d4d5df-d427-476d-af30-78c96df20760 does not exist
Nov 26 11:50:02 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 458f3cd3-7c5a-43a3-a0db-bc2d824ebf77 does not exist
Nov 26 11:50:02 compute-0 systemd-rc-local-generator[218329]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:50:02 compute-0 systemd-sysv-generator[218336]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:50:02 compute-0 sudo[218287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:50:02 compute-0 sudo[218287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:50:02 compute-0 sudo[218287]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:02 compute-0 sudo[218206]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:02 compute-0 sudo[218346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:50:02 compute-0 sudo[218346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:50:02 compute-0 sudo[218346]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:02 compute-0 ceph-mon[74928]: pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:02 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:50:02 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:50:02 compute-0 sshd-session[160043]: Connection closed by 192.168.122.30 port 51510
Nov 26 11:50:02 compute-0 sshd-session[160040]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:50:02 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Nov 26 11:50:02 compute-0 systemd[1]: session-48.scope: Consumed 2min 22.687s CPU time.
Nov 26 11:50:02 compute-0 systemd-logind[744]: Session 48 logged out. Waiting for processes to exit.
Nov 26 11:50:02 compute-0 systemd-logind[744]: Removed session 48.
Nov 26 11:50:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:50:02.983 159928 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:50:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:50:02.984 159928 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:50:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:50:02.984 159928 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:50:03 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:04 compute-0 podman[218395]: 2025-11-26 11:50:04.616172539 +0000 UTC m=+0.038725178 container health_status 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
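The health_status=healthy event above comes from podman's periodic healthcheck, which runs the configured 'test' command (/openstack/healthcheck) inside the ovn_metadata_agent container. The same check can be triggered on demand; a small sketch, assuming podman is on PATH and reusing the container name from the log:

    import subprocess

    def container_healthy(name="ovn_metadata_agent"):
        """'podman healthcheck run' executes the container's configured test
        command and exits 0 when it reports healthy."""
        return subprocess.run(["podman", "healthcheck", "run", name]).returncode == 0

    print("healthy" if container_healthy() else "unhealthy")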
Nov 26 11:50:04 compute-0 ceph-mon[74928]: pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:05 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:50:06 compute-0 ceph-mon[74928]: pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:07 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:08 compute-0 sshd-session[218411]: Accepted publickey for zuul from 192.168.122.30 port 39404 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:50:08 compute-0 systemd-logind[744]: New session 49 of user zuul.
Nov 26 11:50:08 compute-0 systemd[1]: Started Session 49 of User zuul.
Nov 26 11:50:08 compute-0 sshd-session[218411]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:50:08 compute-0 ceph-mon[74928]: pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:09 compute-0 python3.9[218564]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:50:09 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:10 compute-0 python3.9[218718]: ansible-ansible.builtin.service_facts Invoked
Nov 26 11:50:10 compute-0 network[218735]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 11:50:10 compute-0 network[218736]: 'network-scripts' will be removed from distribution in near future.
Nov 26 11:50:10 compute-0 network[218737]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 11:50:10 compute-0 ceph-mon[74928]: pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:50:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:50:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:50:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:50:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:50:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:50:11 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:50:12 compute-0 sudo[219007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvdopwfgeqgruhpiwlyypvhcpcpyopzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157812.2243629-47-31203619019468/AnsiballZ_setup.py'
Nov 26 11:50:12 compute-0 sudo[219007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:12 compute-0 python3.9[219009]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 11:50:12 compute-0 ceph-mon[74928]: pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:12 compute-0 sudo[219007]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:13 compute-0 sudo[219091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhyerzmxzysivzuwqnmafxqwxorgvnra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157812.2243629-47-31203619019468/AnsiballZ_dnf.py'
Nov 26 11:50:13 compute-0 sudo[219091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:13 compute-0 python3.9[219093]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 11:50:13 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:14 compute-0 ceph-mon[74928]: pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:15 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 26 11:50:15 compute-0 podman[219095]: 2025-11-26 11:50:15.643147025 +0000 UTC m=+0.058307390 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 26 11:50:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:50:16 compute-0 ceph-mon[74928]: pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 26 11:50:17 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 26 11:50:17 compute-0 sudo[219091]: pam_unix(sudo:session): session closed for user root
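The ansible.legacy.dnf task above installs iscsi-initiator-utils with state=present, so it is a no-op when the package is already on the host. A rough stand-in for the same step outside Ansible, as a sketch (only the package name is taken from the log):

    import subprocess

    def ensure_package(name):
        """Idempotent install: 'dnf -y install' does nothing when the
        requested package is already present."""
        subprocess.run(["dnf", "-y", "install", name], check=True)

    ensure_package("iscsi-initiator-utils")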
Nov 26 11:50:17 compute-0 sudo[219268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzjngpisovxxbiswpifeqrmhkyrthoay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157817.7049172-59-158526415307810/AnsiballZ_stat.py'
Nov 26 11:50:17 compute-0 sudo[219268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:18 compute-0 python3.9[219270]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:50:18 compute-0 sudo[219268]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:18 compute-0 sudo[219420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrtdyiijeunamcslacahhoxhnpqgfywi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157818.3207622-69-102467252556115/AnsiballZ_command.py'
Nov 26 11:50:18 compute-0 sudo[219420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:18 compute-0 ceph-mon[74928]: pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 26 11:50:18 compute-0 python3.9[219422]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:50:18 compute-0 sudo[219420]: pam_unix(sudo:session): session closed for user root
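The restorecon invocation above uses -n (dry run), -v (verbose) and -r (recursive), so it only reports files whose SELinux context would change under /etc/iscsi and /var/lib/iscsi. A sketch of using that behaviour as a check, under the assumption that any verbose output means a relabel is pending:

    import subprocess

    def selinux_relabel_needed(*paths):
        """Dry-run restorecon; verbose output lists files it would relabel,
        so empty output means the contexts are already correct."""
        result = subprocess.run(["/usr/sbin/restorecon", "-nvr", *paths],
                                capture_output=True, text=True, check=True)
        return bool(result.stdout.strip())

    print(selinux_relabel_needed("/etc/iscsi", "/var/lib/iscsi"))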
Nov 26 11:50:19 compute-0 sudo[219573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpcupvwpdranejplfqljuioshvqovqbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157818.9977005-79-223707004345922/AnsiballZ_stat.py'
Nov 26 11:50:19 compute-0 sudo[219573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:19 compute-0 python3.9[219575]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:50:19 compute-0 sudo[219573]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:19 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 11:50:19 compute-0 sudo[219725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqiuybfaatvkohpqbxaxtiouqmufxowc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157819.4409769-87-195891506916597/AnsiballZ_command.py'
Nov 26 11:50:19 compute-0 sudo[219725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:19 compute-0 python3.9[219727]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:50:19 compute-0 sudo[219725]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:20 compute-0 sudo[219878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urzaoxfeepwkoninpeudngymnrmwtucu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157819.9097052-95-106298783127672/AnsiballZ_stat.py'
Nov 26 11:50:20 compute-0 sudo[219878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:20 compute-0 python3.9[219880]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:50:20 compute-0 sudo[219878]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:20 compute-0 sudo[220001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nstatwqdzikpmhefwlfqupwwpytiovgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157819.9097052-95-106298783127672/AnsiballZ_copy.py'
Nov 26 11:50:20 compute-0 sudo[220001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:20 compute-0 ceph-mon[74928]: pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 11:50:20 compute-0 python3.9[220003]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764157819.9097052-95-106298783127672/.source.iscsi _original_basename=.z5igc6d4 follow=False checksum=5e48ce3e1f21cbcd1b3f53391e27d3151b502d2e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:50:20 compute-0 sudo[220001]: pam_unix(sudo:session): session closed for user root
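The two tasks above first run /usr/sbin/iscsi-iname to generate a fresh IQN and then write it to /etc/iscsi/initiatorname.iscsi with mode 0644 (the copied file presumably reduces to the usual single InitiatorName= line). A compact sketch of the same sequence, offered as an illustration rather than the playbook's actual template:

    import os
    import subprocess

    def reset_initiator_name(path="/etc/iscsi/initiatorname.iscsi"):
        """Generate a new iSCSI initiator IQN and persist it as
        InitiatorName=<iqn>, mode 0644, like the tasks above."""
        iqn = subprocess.run(["/usr/sbin/iscsi-iname"], capture_output=True,
                             text=True, check=True).stdout.strip()
        with open(path, "w") as fh:
            fh.write(f"InitiatorName={iqn}\n")
        os.chmod(path, 0o644)
        return iqn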
Nov 26 11:50:21 compute-0 sudo[220153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txlznjyrhtviqodovdkobpuiaqxrbtno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157820.8652246-110-191893765468458/AnsiballZ_file.py'
Nov 26 11:50:21 compute-0 sudo[220153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:21 compute-0 python3.9[220155]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:50:21 compute-0 sudo[220153]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:21 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 11:50:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:50:21 compute-0 sudo[220305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dudnkfpoxrnaboqvymmkksbckjibcqii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157821.4654357-118-36666183017716/AnsiballZ_lineinfile.py'
Nov 26 11:50:21 compute-0 sudo[220305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:21 compute-0 python3.9[220307]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:50:21 compute-0 sudo[220305]: pam_unix(sudo:session): session closed for user root
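The lineinfile task above pins the CHAP digest list in /etc/iscsi/iscsid.conf: an existing node.session.auth.chap_algs line is replaced, otherwise the setting is inserted after the commented-out default. A sketch of the same idempotent edit (simplified to rewrite the first match, where the Ansible module rewrites the last):

    import re

    CHAP_LINE = "node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5"

    def set_chap_algs(path="/etc/iscsi/iscsid.conf", line=CHAP_LINE):
        """Replace an existing chap_algs setting or insert one after the
        commented default, then write the file back."""
        with open(path) as fh:
            lines = fh.read().splitlines()
        setting = re.compile(r"^node\.session\.auth\.chap_algs")
        anchor = re.compile(r"^#node\.session\.auth\.chap\.algs")
        for i, existing in enumerate(lines):
            if setting.match(existing):
                lines[i] = line
                break
        else:
            idx = next((i for i, l in enumerate(lines) if anchor.match(l)), None)
            if idx is not None:
                lines.insert(idx + 1, line)
            else:
                lines.append(line)
        with open(path, "w") as fh:
            fh.write("\n".join(lines) + "\n")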
Nov 26 11:50:22 compute-0 sudo[220457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ludpsbhiqtxvbrinflzyxcwqaqlwmsqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157822.0854783-127-82343100594935/AnsiballZ_systemd_service.py'
Nov 26 11:50:22 compute-0 sudo[220457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:22 compute-0 ceph-mon[74928]: pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 11:50:22 compute-0 python3.9[220459]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:50:22 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 26 11:50:22 compute-0 sudo[220457]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:23 compute-0 sudo[220613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuwzslsseslarqlwyfziruyuesoslxvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157822.9558072-135-18679625947078/AnsiballZ_systemd_service.py'
Nov 26 11:50:23 compute-0 sudo[220613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:23 compute-0 python3.9[220615]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:50:23 compute-0 systemd[1]: Reloading.
Nov 26 11:50:23 compute-0 systemd-rc-local-generator[220638]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:50:23 compute-0 systemd-sysv-generator[220641]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:50:23 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 11:50:23 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 26 11:50:23 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 26 11:50:23 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Nov 26 11:50:23 compute-0 systemd[1]: Started Open-iSCSI.
Nov 26 11:50:23 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 26 11:50:23 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 26 11:50:23 compute-0 sudo[220613]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:24 compute-0 sudo[220812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epdanzuioilclsndafigojyttrbeppte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157824.024687-146-57055962057280/AnsiballZ_service_facts.py'
Nov 26 11:50:24 compute-0 sudo[220812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:24 compute-0 python3.9[220814]: ansible-ansible.builtin.service_facts Invoked
Nov 26 11:50:24 compute-0 network[220831]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 11:50:24 compute-0 network[220832]: 'network-scripts' will be removed from distribution in near future.
Nov 26 11:50:24 compute-0 network[220833]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 11:50:24 compute-0 ceph-mon[74928]: pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 11:50:25 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 11:50:26 compute-0 sudo[220812]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:50:26 compute-0 ceph-mon[74928]: pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 11:50:26 compute-0 sudo[221103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlzmtateionychutolefcunzptkwrjba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157826.6497176-156-72292841601097/AnsiballZ_file.py'
Nov 26 11:50:26 compute-0 sudo[221103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:26 compute-0 python3.9[221105]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 26 11:50:27 compute-0 sudo[221103]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:27 compute-0 sudo[221255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doktbapdxwovzjpemimqoofaiycsxndn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157827.1267178-164-55530677988753/AnsiballZ_modprobe.py'
Nov 26 11:50:27 compute-0 sudo[221255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:27 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 26 11:50:27 compute-0 python3.9[221257]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 26 11:50:27 compute-0 sudo[221255]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:27 compute-0 sudo[221411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gewbiwcnxvmmexwjquydytelbrehpxub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157827.7356188-172-10941694780763/AnsiballZ_stat.py'
Nov 26 11:50:27 compute-0 sudo[221411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:28 compute-0 python3.9[221413]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:50:28 compute-0 sudo[221411]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:28 compute-0 sudo[221534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwjatvvfyxtcspqdqnqfuphgdtjgupfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157827.7356188-172-10941694780763/AnsiballZ_copy.py'
Nov 26 11:50:28 compute-0 sudo[221534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:28 compute-0 python3.9[221536]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764157827.7356188-172-10941694780763/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:50:28 compute-0 sudo[221534]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:28 compute-0 ceph-mon[74928]: pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 26 11:50:28 compute-0 sudo[221686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gthyugbnublokubpjwfhclddtdawvseg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157828.6418474-188-276930766467951/AnsiballZ_lineinfile.py'
Nov 26 11:50:28 compute-0 sudo[221686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:28 compute-0 python3.9[221688]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:50:28 compute-0 sudo[221686]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:29 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 26 11:50:29 compute-0 sudo[221838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfnnofyxmigybkfrdeolzmtwacomcyyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157829.1179314-196-232448181613939/AnsiballZ_systemd.py'
Nov 26 11:50:29 compute-0 sudo[221838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:29 compute-0 python3.9[221840]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 11:50:29 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 26 11:50:29 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 26 11:50:29 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 26 11:50:29 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 26 11:50:29 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 26 11:50:29 compute-0 sudo[221838]: pam_unix(sudo:session): session closed for user root
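Taken together, the steps above load dm-multipath immediately, drop the module name into /etc/modules-load.d/dm-multipath.conf (and /etc/modules) so it returns after a reboot, and restart systemd-modules-load.service to pick the file up. A condensed sketch of the same idea, with the config content assumed to be just the module name (the playbook renders it from a module-load.conf.j2 template not shown here):

    import subprocess
    from pathlib import Path

    def enable_dm_multipath():
        """Load dm-multipath now and make the load persistent via
        modules-load.d, then re-run the systemd module loader."""
        subprocess.run(["modprobe", "dm-multipath"], check=True)
        Path("/etc/modules-load.d/dm-multipath.conf").write_text("dm-multipath\n")
        subprocess.run(["systemctl", "restart", "systemd-modules-load.service"],
                       check=True)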
Nov 26 11:50:30 compute-0 sudo[221994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpwicpizngchnynwqklzxilsritzdjim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157830.0077214-204-88609474840944/AnsiballZ_file.py'
Nov 26 11:50:30 compute-0 sudo[221994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:30 compute-0 python3.9[221996]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:50:30 compute-0 sudo[221994]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:30 compute-0 sudo[222146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tenrfyxbvacbzgvdssmlngkuhwqahqtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157830.5096204-213-212062630914821/AnsiballZ_stat.py'
Nov 26 11:50:30 compute-0 sudo[222146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:30 compute-0 ceph-mon[74928]: pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 26 11:50:30 compute-0 python3.9[222148]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:50:30 compute-0 sudo[222146]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:31 compute-0 sudo[222298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtfrzgxfxiqfylzrsdgqoarntzrvciry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157831.0009384-222-258304764236969/AnsiballZ_stat.py'
Nov 26 11:50:31 compute-0 sudo[222298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:31 compute-0 python3.9[222300]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:50:31 compute-0 sudo[222298]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:31 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:50:31 compute-0 sudo[222450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwabchzswndmhpvxrryunlsipottuner ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157831.4484873-230-1206650928505/AnsiballZ_stat.py'
Nov 26 11:50:31 compute-0 sudo[222450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:31 compute-0 python3.9[222452]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:50:31 compute-0 sudo[222450]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:32 compute-0 sudo[222573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bguhywtxbvetosziracgoafhypyiaklg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157831.4484873-230-1206650928505/AnsiballZ_copy.py'
Nov 26 11:50:32 compute-0 sudo[222573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:32 compute-0 python3.9[222575]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764157831.4484873-230-1206650928505/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:50:32 compute-0 sudo[222573]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:32 compute-0 sudo[222725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aibbcevtocchyqwczaubmgdglbnqxnqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157832.305576-245-124347843652551/AnsiballZ_command.py'
Nov 26 11:50:32 compute-0 sudo[222725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:32 compute-0 python3.9[222727]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:50:32 compute-0 sudo[222725]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:32 compute-0 ceph-mon[74928]: pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:32 compute-0 sudo[222878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whslzqvycghjhujwhgnuvjrtjungmhbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157832.7701788-253-100953800663850/AnsiballZ_lineinfile.py'
Nov 26 11:50:32 compute-0 sudo[222878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:33 compute-0 python3.9[222880]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:50:33 compute-0 sudo[222878]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:33 compute-0 sudo[223030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igcsuyraubaqvwnuzooscmpgryipxtqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157833.236217-261-206338221654010/AnsiballZ_replace.py'
Nov 26 11:50:33 compute-0 sudo[223030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:33 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:33 compute-0 python3.9[223032]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:50:33 compute-0 sudo[223030]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:34 compute-0 sudo[223182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bawughcplrpjdjjubffbfplqyycrgpir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157833.8447578-269-2632629175518/AnsiballZ_replace.py'
Nov 26 11:50:34 compute-0 sudo[223182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:34 compute-0 python3.9[223184]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:50:34 compute-0 sudo[223182]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:34 compute-0 sudo[223334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiwwcrtrwcnjrgmduxlmiuhfbshfvupk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157834.3440464-278-59885606205459/AnsiballZ_lineinfile.py'
Nov 26 11:50:34 compute-0 sudo[223334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:34 compute-0 python3.9[223336]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:50:34 compute-0 sudo[223334]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:34 compute-0 ceph-mon[74928]: pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:34 compute-0 sudo[223495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acoupufavotkatfdwlzhyfrytjdjpgal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157834.768239-278-154885028629559/AnsiballZ_lineinfile.py'
Nov 26 11:50:34 compute-0 sudo[223495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:34 compute-0 podman[223460]: 2025-11-26 11:50:34.955222072 +0000 UTC m=+0.042448900 container health_status 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 26 11:50:35 compute-0 python3.9[223503]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:50:35 compute-0 sudo[223495]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:35 compute-0 sudo[223656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiuzecwlggwecbaiqhzybkjklujpmswe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157835.204055-278-44885747598190/AnsiballZ_lineinfile.py'
Nov 26 11:50:35 compute-0 sudo[223656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:35 compute-0 python3.9[223658]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:50:35 compute-0 sudo[223656]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:35 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:35 compute-0 sudo[223808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icekumjdaxhxjetcqqlfkxnuthmltfxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157835.6356592-278-111210219423272/AnsiballZ_lineinfile.py'
Nov 26 11:50:35 compute-0 sudo[223808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:35 compute-0 python3.9[223810]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:50:35 compute-0 sudo[223808]: pam_unix(sudo:session): session closed for user root
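The Ansible tasks above assemble /etc/multipath.conf in place: grep checks for a blacklist stanza, lineinfile appends "blacklist {", replace closes it with "}", a second replace strips a blanket devnode ".*" rule, and four lineinfile tasks pin find_multipaths, recheck_wwid, skip_kpartx and user_friendly_names under the defaults section. A minimal Python sketch of that edit sequence follows; the file path and option values are taken from the logged tasks, while the helper itself and its regex details are illustrative assumptions, not the EDPM role source.

    # Hypothetical helper approximating the logged grep/lineinfile/replace sequence
    # against /etc/multipath.conf; option values mirror the tasks recorded above.
    import re

    DEFAULTS_OPTIONS = {
        "find_multipaths": "yes",
        "recheck_wwid": "yes",
        "skip_kpartx": "yes",
        "user_friendly_names": "no",
    }

    def edit_multipath_conf(text: str) -> str:
        # 1. Ensure an empty "blacklist { }" stanza exists (grep + lineinfile + replace).
        if not re.search(r"^blacklist\s*\{", text, flags=re.MULTILINE):
            text = text.rstrip("\n") + "\nblacklist {\n}\n"
        # 2. Drop a blanket devnode ".*" rule directly under blacklist, as the logged replace does.
        text = re.sub(r'^blacklist\s*\{\n\s+devnode "\.\*"', "blacklist {", text, flags=re.MULTILINE)
        # 3. Pin each defaults option: rewrite an existing line or insert after the defaults header.
        for key, value in DEFAULTS_OPTIONS.items():
            line = f"        {key} {value}"
            pattern = re.compile(rf"^\s+{key}\b.*$", flags=re.MULTILINE)
            if pattern.search(text):
                text = pattern.sub(line, text, count=1)
            else:
                text = re.sub(r"^(defaults.*)$", rf"\1\n{line}", text, count=1, flags=re.MULTILINE)
        return text

    if __name__ == "__main__":
        sample = "defaults {\n        user_friendly_names yes\n}\n"
        print(edit_multipath_conf(sample))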
Nov 26 11:50:36 compute-0 sudo[223960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdfcalokhajbuqgkwamlcbslbqwebuym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157836.093483-307-244300651264216/AnsiballZ_stat.py'
Nov 26 11:50:36 compute-0 sudo[223960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:36 compute-0 python3.9[223962]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:50:36 compute-0 sudo[223960]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:50:36 compute-0 ceph-mon[74928]: pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:36 compute-0 sudo[224114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zknxhahvnkinvtxyabkketwgprxrvyon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157836.5647476-315-51225264907866/AnsiballZ_file.py'
Nov 26 11:50:36 compute-0 sudo[224114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:36 compute-0 python3.9[224116]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:50:36 compute-0 sudo[224114]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:37 compute-0 sudo[224266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfsqvbeltblhamlfidvyvltizgwrewuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157837.1110656-324-89723431287197/AnsiballZ_file.py'
Nov 26 11:50:37 compute-0 sudo[224266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:37 compute-0 python3.9[224268]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:50:37 compute-0 sudo[224266]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:37 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:37 compute-0 sudo[224418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awgkgflvzgnqcaqayxlpbfjgbikgzckj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157837.5840054-332-60619776748238/AnsiballZ_stat.py'
Nov 26 11:50:37 compute-0 sudo[224418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:37 compute-0 python3.9[224420]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:50:37 compute-0 sudo[224418]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:38 compute-0 sudo[224496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykwhysydyubyjsaatqlcqxuquyypypch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157837.5840054-332-60619776748238/AnsiballZ_file.py'
Nov 26 11:50:38 compute-0 sudo[224496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:38 compute-0 python3.9[224498]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:50:38 compute-0 sudo[224496]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:38 compute-0 sudo[224648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvqeeyicadivzpvaanjmcspxtszokbpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157838.3390517-332-13908258264763/AnsiballZ_stat.py'
Nov 26 11:50:38 compute-0 sudo[224648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:38 compute-0 python3.9[224650]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:50:38 compute-0 sudo[224648]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:38 compute-0 ceph-mon[74928]: pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:38 compute-0 sudo[224726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsbgrzvjxekuvutyqzainvdtedxamurf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157838.3390517-332-13908258264763/AnsiballZ_file.py'
Nov 26 11:50:38 compute-0 sudo[224726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:38 compute-0 python3.9[224728]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:50:38 compute-0 sudo[224726]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:39 compute-0 sudo[224878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uucwpcrnnbgwpkheiilegayccilmrzvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157839.1139646-355-74324628211920/AnsiballZ_file.py'
Nov 26 11:50:39 compute-0 sudo[224878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:39 compute-0 python3.9[224880]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:50:39 compute-0 sudo[224878]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:39 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:39 compute-0 sudo[225030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igokkjizggcsgbxkvakirrbmxqxurnwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157839.5803945-363-10174047960310/AnsiballZ_stat.py'
Nov 26 11:50:39 compute-0 sudo[225030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:39 compute-0 python3.9[225032]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:50:39 compute-0 sudo[225030]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:40 compute-0 sudo[225108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stxflbmkrwtsymamioymihypzfltmomg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157839.5803945-363-10174047960310/AnsiballZ_file.py'
Nov 26 11:50:40 compute-0 sudo[225108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:40 compute-0 python3.9[225110]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:50:40 compute-0 sudo[225108]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:40 compute-0 sudo[225260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxnmjfeomjsjckwftnwzgvibmcegppva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157840.3763154-375-65684731431484/AnsiballZ_stat.py'
Nov 26 11:50:40 compute-0 sudo[225260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:40 compute-0 python3.9[225262]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:50:40 compute-0 sudo[225260]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:40 compute-0 ceph-mon[74928]: pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:40 compute-0 sudo[225338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkngdtlolegsdeelurmcpgeczjghbqki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157840.3763154-375-65684731431484/AnsiballZ_file.py'
Nov 26 11:50:40 compute-0 sudo[225338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:41 compute-0 python3.9[225340]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:50:41 compute-0 sudo[225338]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:41 compute-0 sudo[225490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfdjowedkhuwiaiqlxuujriawxxgcnty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157841.1546488-387-60827612821507/AnsiballZ_systemd.py'
Nov 26 11:50:41 compute-0 sudo[225490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Optimize plan auto_2025-11-26_11:50:41
Nov 26 11:50:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 11:50:41 compute-0 ceph-mgr[75197]: [balancer INFO root] do_upmap
Nov 26 11:50:41 compute-0 ceph-mgr[75197]: [balancer INFO root] pools ['backups', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'volumes', '.rgw.root', 'default.rgw.meta']
Nov 26 11:50:41 compute-0 ceph-mgr[75197]: [balancer INFO root] prepared 0/10 changes
Nov 26 11:50:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:50:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:50:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:50:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:50:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:50:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:50:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 11:50:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:50:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 11:50:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:50:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:50:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:50:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:50:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:50:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:50:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:50:41 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:50:41 compute-0 python3.9[225492]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:50:41 compute-0 systemd[1]: Reloading.
Nov 26 11:50:41 compute-0 systemd-rc-local-generator[225513]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:50:41 compute-0 systemd-sysv-generator[225516]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:50:41 compute-0 sudo[225490]: pam_unix(sudo:session): session closed for user root
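The tasks preceding this point install edpm-container-shutdown.service, drop 91-edpm-container-shutdown.preset under /etc/systemd/system-preset, and then call ansible.builtin.systemd with daemon_reload=True, enabled=True and state=started, which triggers the "Reloading." lines above. Assuming that module drives systemctl in the usual way, the equivalent sequence looks roughly like this sketch (illustrative only, not the module's implementation):

    # Illustrative equivalent of the logged ansible.builtin.systemd invocation
    # for edpm-container-shutdown (daemon_reload=True, enabled=True, state=started).
    import subprocess

    def enable_and_start(unit: str = "edpm-container-shutdown.service") -> None:
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "enable", unit], check=True)
        subprocess.run(["systemctl", "start", unit], check=True)

    if __name__ == "__main__":
        enable_and_start()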
Nov 26 11:50:42 compute-0 sudo[225679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlhexhwccspzyisyexjsrbqyrwvgddwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157841.9961152-395-76443802290941/AnsiballZ_stat.py'
Nov 26 11:50:42 compute-0 sudo[225679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:42 compute-0 python3.9[225681]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:50:42 compute-0 sudo[225679]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:42 compute-0 sudo[225757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljfukibtnsmdixagqqsrkdxzsgsnheiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157841.9961152-395-76443802290941/AnsiballZ_file.py'
Nov 26 11:50:42 compute-0 sudo[225757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:42 compute-0 python3.9[225759]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:50:42 compute-0 sudo[225757]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:42 compute-0 ceph-mon[74928]: pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:42 compute-0 sudo[225909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcmzwhlljrplrsryrgdncehomqxsysrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157842.7838702-407-18965271202098/AnsiballZ_stat.py'
Nov 26 11:50:42 compute-0 sudo[225909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:43 compute-0 python3.9[225911]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:50:43 compute-0 sudo[225909]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:43 compute-0 sudo[225987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vacgttwowzczdkjnifyhazwnyjybpsdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157842.7838702-407-18965271202098/AnsiballZ_file.py'
Nov 26 11:50:43 compute-0 sudo[225987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:43 compute-0 python3.9[225989]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:50:43 compute-0 sudo[225987]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:43 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:43 compute-0 sudo[226139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqjirvbqrqukivooopfblvvgtvsppxpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157843.5456626-419-109873139974133/AnsiballZ_systemd.py'
Nov 26 11:50:43 compute-0 sudo[226139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:43 compute-0 python3.9[226141]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:50:43 compute-0 systemd[1]: Reloading.
Nov 26 11:50:44 compute-0 systemd-sysv-generator[226165]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:50:44 compute-0 systemd-rc-local-generator[226162]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:50:44 compute-0 systemd[1]: Starting Create netns directory...
Nov 26 11:50:44 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 26 11:50:44 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 26 11:50:44 compute-0 systemd[1]: Finished Create netns directory.
Nov 26 11:50:44 compute-0 sudo[226139]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:44 compute-0 sudo[226332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suhcrzinpyelwbjuglrcbdjbdggoqnrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157844.5662558-429-263239734386669/AnsiballZ_file.py'
Nov 26 11:50:44 compute-0 sudo[226332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:44 compute-0 ceph-mon[74928]: pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:44 compute-0 python3.9[226334]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:50:44 compute-0 sudo[226332]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:45 compute-0 sudo[226484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlplfjgpzgsjpihwucxpzjzvmafdkzss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157845.0592618-437-48613849957548/AnsiballZ_stat.py'
Nov 26 11:50:45 compute-0 sudo[226484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:45 compute-0 python3.9[226486]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:50:45 compute-0 sudo[226484]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:45 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:45 compute-0 sudo[226607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cylbilvvlbospzpdgjghflgjmqwodrhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157845.0592618-437-48613849957548/AnsiballZ_copy.py'
Nov 26 11:50:45 compute-0 sudo[226607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:45 compute-0 python3.9[226609]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764157845.0592618-437-48613849957548/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:50:45 compute-0 sudo[226607]: pam_unix(sudo:session): session closed for user root
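The copy task above stages the multipathd healthcheck script under /var/lib/openstack/healthchecks/multipathd/, the directory that later appears in the container's config_data as the mount backing the '/openstack/healthcheck' test. A small sketch of exercising such a check through podman, assuming a container named multipathd; exit code 0 means the check passed:

    # Sketch: run the container's configured healthcheck command via podman.
    import subprocess

    def container_is_healthy(container: str = "multipathd") -> bool:
        result = subprocess.run(["podman", "healthcheck", "run", container])
        return result.returncode == 0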
Nov 26 11:50:45 compute-0 podman[226610]: 2025-11-26 11:50:45.852239297 +0000 UTC m=+0.062577184 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:50:46 compute-0 sudo[226782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbcwujawfwwqvbdhtjcpaerakmawdcvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157846.0921667-454-219399333198272/AnsiballZ_file.py'
Nov 26 11:50:46 compute-0 sudo[226782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:46 compute-0 python3.9[226784]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:50:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:50:46 compute-0 sudo[226782]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:46 compute-0 ceph-mon[74928]: pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:46 compute-0 sudo[226934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iioovrwjhocjvyggtdjrxbguenwbtrlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157846.719017-462-89385126479541/AnsiballZ_stat.py'
Nov 26 11:50:46 compute-0 sudo[226934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:47 compute-0 python3.9[226936]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:50:47 compute-0 sudo[226934]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:47 compute-0 sudo[227057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdxacafvtpkzerxshbianotkcchgjcbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157846.719017-462-89385126479541/AnsiballZ_copy.py'
Nov 26 11:50:47 compute-0 sudo[227057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:47 compute-0 python3.9[227059]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764157846.719017-462-89385126479541/.source.json _original_basename=.36cc7iuu follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:50:47 compute-0 sudo[227057]: pam_unix(sudo:session): session closed for user root
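multipathd.json is written with mode 0600 and is later bind-mounted into the container as /var/lib/kolla/config_files/config.json, where KOLLA_CONFIG_STRATEGY=COPY_ALWAYS tells the kolla entrypoint to apply it on every start. The log does not show the file's contents; the sketch below only illustrates the usual kolla config.json shape, with assumed values.

    # Assumed kolla-style config.json; the deployed multipathd.json contents are not in the log.
    import json
    import os

    def write_kolla_config(path: str = "/var/lib/kolla/config_files/multipathd.json") -> None:
        config = {
            "command": "/usr/sbin/multipathd -d",   # assumed command, for illustration only
            "config_files": [],                     # no extra files assumed
            "permissions": [],
        }
        with open(path, "w") as handle:
            json.dump(config, handle, indent=4)
        os.chmod(path, 0o600)                       # mode recorded in the logged copy task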
Nov 26 11:50:47 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:47 compute-0 sudo[227209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrlrklfqeevayfbvgaiejcwazacwimem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157847.575125-477-42732856206877/AnsiballZ_file.py'
Nov 26 11:50:47 compute-0 sudo[227209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:47 compute-0 python3.9[227211]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:50:47 compute-0 sudo[227209]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:48 compute-0 sudo[227361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arrerkdltywpnakrfmydbaxwswewxljm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157848.0857353-485-216284898129119/AnsiballZ_stat.py'
Nov 26 11:50:48 compute-0 sudo[227361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:48 compute-0 sudo[227361]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:48 compute-0 sudo[227484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rubkksnedaqefblejksqqttuvraknlkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157848.0857353-485-216284898129119/AnsiballZ_copy.py'
Nov 26 11:50:48 compute-0 sudo[227484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:48 compute-0 ceph-mon[74928]: pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:48 compute-0 sudo[227484]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:49 compute-0 sudo[227636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewugnfaetmngrbonnumtnjdketizzsem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157849.1129763-502-174100490936509/AnsiballZ_container_config_data.py'
Nov 26 11:50:49 compute-0 sudo[227636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:49 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:49 compute-0 python3.9[227638]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 26 11:50:49 compute-0 sudo[227636]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:50 compute-0 sudo[227788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqjowkuqxpsxkesqrkfzoogbbzrszvqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157849.7778099-511-193536990906650/AnsiballZ_container_config_hash.py'
Nov 26 11:50:50 compute-0 sudo[227788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:50 compute-0 python3.9[227790]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 11:50:50 compute-0 sudo[227788]: pam_unix(sudo:session): session closed for user root
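The container_config_data and container_config_hash modules read the generated *.json startup configs and derive a hash of the kind that surfaces as an EDPM_CONFIG_HASH value (see the ovn_metadata_agent config_data earlier in the log), so a container is recreated when its generated configuration changes. The exact hashing scheme is not visible here; a plausible sketch:

    # Plausible sketch of hashing the generated startup configs; not the
    # container_config_hash implementation, whose scheme the log does not show.
    import glob
    import hashlib

    def config_hash(config_dir: str = "/var/lib/edpm-config/container-startup-config/multipathd") -> str:
        digest = hashlib.sha256()
        for path in sorted(glob.glob(f"{config_dir}/*.json")):
            with open(path, "rb") as handle:
                digest.update(handle.read())
        return digest.hexdigest()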
Nov 26 11:50:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 11:50:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:50:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 11:50:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:50:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:50:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:50:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:50:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:50:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:50:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:50:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:50:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:50:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 11:50:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:50:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:50:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:50:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 11:50:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:50:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 11:50:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:50:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:50:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:50:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 11:50:50 compute-0 ceph-mon[74928]: pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:50 compute-0 sudo[227940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtimcygmylbtivhfwbhskzizklyggmsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157850.4706066-520-103710153127599/AnsiballZ_podman_container_info.py'
Nov 26 11:50:50 compute-0 sudo[227940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:50 compute-0 python3.9[227942]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 26 11:50:51 compute-0 sudo[227940]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:51 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:50:51 compute-0 sudo[228111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zldysyccfllnltbxigkcfvdojcglwftb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764157851.572899-533-253262644688490/AnsiballZ_edpm_container_manage.py'
Nov 26 11:50:51 compute-0 sudo[228111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:52 compute-0 python3[228113]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 11:50:52 compute-0 ceph-mon[74928]: pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:53 compute-0 podman[228124]: 2025-11-26 11:50:53.454004888 +0000 UTC m=+1.269472647 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 26 11:50:53 compute-0 podman[228170]: 2025-11-26 11:50:53.545657157 +0000 UTC m=+0.028542853 container create b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd)
Nov 26 11:50:53 compute-0 podman[228170]: 2025-11-26 11:50:53.532894275 +0000 UTC m=+0.015779971 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 26 11:50:53 compute-0 python3[228113]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 26 11:50:53 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:53 compute-0 sudo[228111]: pam_unix(sudo:session): session closed for user root
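The PODMAN-CONTAINER-DEBUG line above shows how edpm_container_manage expands the multipathd config_data into a podman create command: environment entries become --env flags, the healthcheck test becomes --healthcheck-command, net/privileged map to --network and --privileged, and each volume becomes a --volume bind mount. A partial Python reconstruction of that mapping follows, for illustration only (labels, conmon pidfile and log options are omitted, and this is not the edpm_container_manage source):

    # Illustrative, partial config_data -> "podman create" argument mapping.
    def podman_create_args(name: str, config: dict) -> list[str]:
        args = ["podman", "create", "--name", name]
        for key, value in config.get("environment", {}).items():
            args += ["--env", f"{key}={value}"]
        if "healthcheck" in config:
            args += ["--healthcheck-command", config["healthcheck"]["test"]]
        if config.get("net"):
            args += ["--network", config["net"]]
        if config.get("privileged"):
            args.append("--privileged=True")
        for volume in config.get("volumes", []):
            args += ["--volume", volume]
        args.append(config["image"])
        return args

    if __name__ == "__main__":
        demo = {
            "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
            "healthcheck": {"test": "/openstack/healthcheck"},
            "image": "quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24",
            "net": "host",
            "privileged": True,
            "volumes": ["/etc/multipath.conf:/etc/multipath.conf:ro"],
        }
        print(" ".join(podman_create_args("multipathd", demo)))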
Nov 26 11:50:53 compute-0 sudo[228348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yajauqjxdgbyvdypqktoophtbufqjgix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157853.767797-541-137121906765319/AnsiballZ_stat.py'
Nov 26 11:50:53 compute-0 sudo[228348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:54 compute-0 python3.9[228350]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:50:54 compute-0 sudo[228348]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:54 compute-0 sudo[228502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcjhdqoetzegrazsiressunuimhenhwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157854.3107307-550-222886889078676/AnsiballZ_file.py'
Nov 26 11:50:54 compute-0 sudo[228502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:54 compute-0 python3.9[228504]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:50:54 compute-0 sudo[228502]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:54 compute-0 ceph-mon[74928]: pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:54 compute-0 sudo[228578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdubiahkimjjxnirmiscvsjiqunuezlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157854.3107307-550-222886889078676/AnsiballZ_stat.py'
Nov 26 11:50:54 compute-0 sudo[228578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:54 compute-0 python3.9[228580]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:50:54 compute-0 sudo[228578]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:55 compute-0 sudo[228729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahcjdysqpqyyeybzaaixuumvbnhnelez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157855.0095594-550-67258804838219/AnsiballZ_copy.py'
Nov 26 11:50:55 compute-0 sudo[228729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:55 compute-0 python3.9[228731]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764157855.0095594-550-67258804838219/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:50:55 compute-0 sudo[228729]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:55 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:55 compute-0 sudo[228805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivnqoirmoxeyrmgedxsjoircjqhlpvip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157855.0095594-550-67258804838219/AnsiballZ_systemd.py'
Nov 26 11:50:55 compute-0 sudo[228805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:55 compute-0 python3.9[228807]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 11:50:55 compute-0 systemd[1]: Reloading.
Nov 26 11:50:55 compute-0 systemd-sysv-generator[228833]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:50:55 compute-0 systemd-rc-local-generator[228830]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:50:56 compute-0 sudo[228805]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:56 compute-0 sudo[228916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sovwlkzjhwutdgsnbsvodrkldsuziefj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157855.0095594-550-67258804838219/AnsiballZ_systemd.py'
Nov 26 11:50:56 compute-0 sudo[228916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:50:56 compute-0 python3.9[228918]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:50:56 compute-0 systemd[1]: Reloading.
Nov 26 11:50:56 compute-0 systemd-sysv-generator[228944]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:50:56 compute-0 systemd-rc-local-generator[228941]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:50:56 compute-0 ceph-mon[74928]: pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:56 compute-0 systemd[1]: Starting multipathd container...
Nov 26 11:50:56 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa034b5f4faae7e80243573397e4293e195df3194ede76b3f18ac8e7e9589eaf/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 26 11:50:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa034b5f4faae7e80243573397e4293e195df3194ede76b3f18ac8e7e9589eaf/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 26 11:50:57 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e.
Nov 26 11:50:57 compute-0 podman[228957]: 2025-11-26 11:50:57.021342603 +0000 UTC m=+0.082772522 container init b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 26 11:50:57 compute-0 multipathd[228969]: + sudo -E kolla_set_configs
Nov 26 11:50:57 compute-0 podman[228957]: 2025-11-26 11:50:57.041415 +0000 UTC m=+0.102844909 container start b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 11:50:57 compute-0 podman[228957]: multipathd
Nov 26 11:50:57 compute-0 systemd[1]: Started multipathd container.
Nov 26 11:50:57 compute-0 sudo[228975]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 26 11:50:57 compute-0 sudo[228975]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 26 11:50:57 compute-0 sudo[228975]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 26 11:50:57 compute-0 sudo[228916]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:57 compute-0 multipathd[228969]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 11:50:57 compute-0 multipathd[228969]: INFO:__main__:Validating config file
Nov 26 11:50:57 compute-0 multipathd[228969]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 11:50:57 compute-0 multipathd[228969]: INFO:__main__:Writing out command to execute
Nov 26 11:50:57 compute-0 podman[228976]: 2025-11-26 11:50:57.100231639 +0000 UTC m=+0.043711131 container health_status b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 26 11:50:57 compute-0 systemd[1]: b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e-5aa4b094231eb6ff.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 11:50:57 compute-0 systemd[1]: b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e-5aa4b094231eb6ff.service: Failed with result 'exit-code'.
Nov 26 11:50:57 compute-0 sudo[228975]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:57 compute-0 multipathd[228969]: ++ cat /run_command
Nov 26 11:50:57 compute-0 multipathd[228969]: + CMD='/usr/sbin/multipathd -d'
Nov 26 11:50:57 compute-0 multipathd[228969]: + ARGS=
Nov 26 11:50:57 compute-0 multipathd[228969]: + sudo kolla_copy_cacerts
Nov 26 11:50:57 compute-0 sudo[229016]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 26 11:50:57 compute-0 sudo[229016]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 26 11:50:57 compute-0 sudo[229016]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 26 11:50:57 compute-0 sudo[229016]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:57 compute-0 multipathd[228969]: + [[ ! -n '' ]]
Nov 26 11:50:57 compute-0 multipathd[228969]: + . kolla_extend_start
Nov 26 11:50:57 compute-0 multipathd[228969]: Running command: '/usr/sbin/multipathd -d'
Nov 26 11:50:57 compute-0 multipathd[228969]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 26 11:50:57 compute-0 multipathd[228969]: + umask 0022
Nov 26 11:50:57 compute-0 multipathd[228969]: + exec /usr/sbin/multipathd -d
Nov 26 11:50:57 compute-0 multipathd[228969]: 2630.484826 | --------start up--------
Nov 26 11:50:57 compute-0 multipathd[228969]: 2630.484837 | read /etc/multipath.conf
Nov 26 11:50:57 compute-0 multipathd[228969]: 2630.488943 | path checkers start up
Nov 26 11:50:57 compute-0 python3.9[229156]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:50:57 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:57 compute-0 sudo[229308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tighasyrqdywpnavriglolumlcjozpfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157857.6621318-586-258951145309204/AnsiballZ_command.py'
Nov 26 11:50:57 compute-0 sudo[229308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:57 compute-0 python3.9[229310]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:50:58 compute-0 sudo[229308]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:58 compute-0 sudo[229469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcwndbbaxjomxrmwrvhindthttvtrgrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157858.1667867-594-236238198363123/AnsiballZ_systemd.py'
Nov 26 11:50:58 compute-0 sudo[229469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:58 compute-0 python3.9[229471]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 11:50:58 compute-0 systemd[1]: Stopping multipathd container...
Nov 26 11:50:58 compute-0 multipathd[228969]: 2632.007675 | exit (signal)
Nov 26 11:50:58 compute-0 multipathd[228969]: 2632.007727 | --------shut down-------
Nov 26 11:50:58 compute-0 systemd[1]: libpod-b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e.scope: Deactivated successfully.
Nov 26 11:50:58 compute-0 podman[229475]: 2025-11-26 11:50:58.688232918 +0000 UTC m=+0.053931387 container died b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 26 11:50:58 compute-0 systemd[1]: b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e-5aa4b094231eb6ff.timer: Deactivated successfully.
Nov 26 11:50:58 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e.
Nov 26 11:50:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e-userdata-shm.mount: Deactivated successfully.
Nov 26 11:50:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa034b5f4faae7e80243573397e4293e195df3194ede76b3f18ac8e7e9589eaf-merged.mount: Deactivated successfully.
Nov 26 11:50:58 compute-0 ceph-mon[74928]: pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:50:58 compute-0 podman[229475]: 2025-11-26 11:50:58.766533725 +0000 UTC m=+0.132232194 container cleanup b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:50:58 compute-0 podman[229475]: multipathd
Nov 26 11:50:58 compute-0 podman[229496]: multipathd
Nov 26 11:50:58 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 26 11:50:58 compute-0 systemd[1]: Stopped multipathd container.
Nov 26 11:50:58 compute-0 systemd[1]: Starting multipathd container...
Nov 26 11:50:58 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa034b5f4faae7e80243573397e4293e195df3194ede76b3f18ac8e7e9589eaf/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 26 11:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa034b5f4faae7e80243573397e4293e195df3194ede76b3f18ac8e7e9589eaf/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 26 11:50:58 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e.
Nov 26 11:50:58 compute-0 podman[229505]: 2025-11-26 11:50:58.914347746 +0000 UTC m=+0.084209153 container init b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 26 11:50:58 compute-0 multipathd[229517]: + sudo -E kolla_set_configs
Nov 26 11:50:58 compute-0 sudo[229523]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 26 11:50:58 compute-0 sudo[229523]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 26 11:50:58 compute-0 sudo[229523]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 26 11:50:58 compute-0 podman[229505]: 2025-11-26 11:50:58.939179279 +0000 UTC m=+0.109040665 container start b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 11:50:58 compute-0 podman[229505]: multipathd
Nov 26 11:50:58 compute-0 systemd[1]: Started multipathd container.
Nov 26 11:50:58 compute-0 sudo[229469]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:58 compute-0 multipathd[229517]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 11:50:58 compute-0 multipathd[229517]: INFO:__main__:Validating config file
Nov 26 11:50:58 compute-0 multipathd[229517]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 11:50:58 compute-0 multipathd[229517]: INFO:__main__:Writing out command to execute
Nov 26 11:50:58 compute-0 sudo[229523]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:58 compute-0 multipathd[229517]: ++ cat /run_command
Nov 26 11:50:58 compute-0 multipathd[229517]: + CMD='/usr/sbin/multipathd -d'
Nov 26 11:50:58 compute-0 multipathd[229517]: + ARGS=
Nov 26 11:50:58 compute-0 multipathd[229517]: + sudo kolla_copy_cacerts
Nov 26 11:50:58 compute-0 podman[229524]: 2025-11-26 11:50:58.98714885 +0000 UTC m=+0.044592433 container health_status b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd)
Nov 26 11:50:58 compute-0 systemd[1]: b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e-b7182378bb98606.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 11:50:58 compute-0 systemd[1]: b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e-b7182378bb98606.service: Failed with result 'exit-code'.
Nov 26 11:50:58 compute-0 sudo[229544]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 26 11:50:58 compute-0 sudo[229544]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 26 11:50:58 compute-0 sudo[229544]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 26 11:50:58 compute-0 sudo[229544]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:58 compute-0 multipathd[229517]: + [[ ! -n '' ]]
Nov 26 11:50:58 compute-0 multipathd[229517]: + . kolla_extend_start
Nov 26 11:50:58 compute-0 multipathd[229517]: Running command: '/usr/sbin/multipathd -d'
Nov 26 11:50:58 compute-0 multipathd[229517]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 26 11:50:58 compute-0 multipathd[229517]: + umask 0022
Nov 26 11:50:58 compute-0 multipathd[229517]: + exec /usr/sbin/multipathd -d
Nov 26 11:50:59 compute-0 multipathd[229517]: 2632.361055 | --------start up--------
Nov 26 11:50:59 compute-0 multipathd[229517]: 2632.361068 | read /etc/multipath.conf
Nov 26 11:50:59 compute-0 multipathd[229517]: 2632.364844 | path checkers start up
Nov 26 11:50:59 compute-0 sudo[229703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzaucifbczpjbkyifyddsufqlnvulqnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157859.1129937-602-104170581232173/AnsiballZ_file.py'
Nov 26 11:50:59 compute-0 sudo[229703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:50:59 compute-0 python3.9[229705]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:50:59 compute-0 sudo[229703]: pam_unix(sudo:session): session closed for user root
Nov 26 11:50:59 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:00 compute-0 sudo[229855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idvybhiauccdzxnwhraetxioyaamdnjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157859.7710104-614-131260964299630/AnsiballZ_file.py'
Nov 26 11:51:00 compute-0 sudo[229855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:00 compute-0 python3.9[229857]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 26 11:51:00 compute-0 sudo[229855]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:00 compute-0 sudo[230007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqzjpsckvnmlnhqbubavrzqbfahndbtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157860.350878-622-259519285721171/AnsiballZ_modprobe.py'
Nov 26 11:51:00 compute-0 sudo[230007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:00 compute-0 python3.9[230009]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 26 11:51:00 compute-0 kernel: Key type psk registered
Nov 26 11:51:00 compute-0 sudo[230007]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:00 compute-0 ceph-mon[74928]: pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:01 compute-0 sudo[230171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqjagfjhkhspozietprcrfgymdszqszr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157860.865075-630-85704070529074/AnsiballZ_stat.py'
Nov 26 11:51:01 compute-0 sudo[230171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:01 compute-0 python3.9[230173]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:51:01 compute-0 sudo[230171]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:01 compute-0 sudo[230294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmjcttzkgclrtbocizslehjjquroqplg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157860.865075-630-85704070529074/AnsiballZ_copy.py'
Nov 26 11:51:01 compute-0 sudo[230294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:01 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:51:01 compute-0 python3.9[230296]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764157860.865075-630-85704070529074/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:51:01 compute-0 sudo[230294]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:01 compute-0 sudo[230446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hskxvnpssxiglrnzkhtiqkugenwuvwjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157861.7761326-646-81385821131655/AnsiballZ_lineinfile.py'
Nov 26 11:51:01 compute-0 sudo[230446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:02 compute-0 python3.9[230448]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:51:02 compute-0 sudo[230446]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:02 compute-0 sudo[230598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmzlqceoltawtfmkbuxagvijbwhfbdac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157862.265303-654-143237530187635/AnsiballZ_systemd.py'
Nov 26 11:51:02 compute-0 sudo[230598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:02 compute-0 sudo[230601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:51:02 compute-0 sudo[230601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:51:02 compute-0 sudo[230601]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:02 compute-0 sudo[230626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:51:02 compute-0 sudo[230626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:51:02 compute-0 sudo[230626]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:02 compute-0 python3.9[230600]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 11:51:02 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 26 11:51:02 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 26 11:51:02 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 26 11:51:02 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 26 11:51:02 compute-0 sudo[230652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:51:02 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 26 11:51:02 compute-0 sudo[230652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:51:02 compute-0 sudo[230652]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:02 compute-0 ceph-mon[74928]: pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:02 compute-0 sudo[230598]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:02 compute-0 sudo[230680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 26 11:51:02 compute-0 sudo[230680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:51:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:51:02.984 159928 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:51:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:51:02.985 159928 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:51:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:51:02.985 159928 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:51:03 compute-0 podman[230866]: 2025-11-26 11:51:03.133279225 +0000 UTC m=+0.040697581 container exec 810eaed6cbde00330c0646cedaef5c2ae94579236d67ba42b8ef02d055d04ad5 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 11:51:03 compute-0 sudo[230925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdskfksqaopeadwlyecdzwtsydiuddur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157862.951596-662-93064180894753/AnsiballZ_dnf.py'
Nov 26 11:51:03 compute-0 sudo[230925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:03 compute-0 podman[230866]: 2025-11-26 11:51:03.238387909 +0000 UTC m=+0.145806264 container exec_died 810eaed6cbde00330c0646cedaef5c2ae94579236d67ba42b8ef02d055d04ad5 (image=quay.io/ceph/ceph:v18, name=ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:51:03 compute-0 python3.9[230927]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 11:51:03 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:03 compute-0 sudo[230680]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:51:03 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:51:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:51:03 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:51:03 compute-0 sudo[231041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:51:03 compute-0 sudo[231041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:51:03 compute-0 sudo[231041]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:03 compute-0 sudo[231066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:51:03 compute-0 sudo[231066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:51:03 compute-0 sudo[231066]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:03 compute-0 sudo[231091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:51:03 compute-0 sudo[231091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:51:03 compute-0 sudo[231091]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:03 compute-0 sudo[231116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 11:51:03 compute-0 sudo[231116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:51:04 compute-0 sudo[231116]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:51:04 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:51:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:51:04 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:51:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:51:04 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:51:04 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 865d6b76-b5b9-4747-9c02-5b84ad5beaef does not exist
Nov 26 11:51:04 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 98a6ab8e-7376-43cb-a1b7-d65707ba692b does not exist
Nov 26 11:51:04 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 384b3039-4960-4ad3-b475-ec5e141fe946 does not exist
Nov 26 11:51:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 11:51:04 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:51:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 11:51:04 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:51:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:51:04 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:51:04 compute-0 sudo[231169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:51:04 compute-0 sudo[231169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:51:04 compute-0 sudo[231169]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:04 compute-0 sudo[231194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:51:04 compute-0 sudo[231194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:51:04 compute-0 sudo[231194]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:04 compute-0 sudo[231219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:51:04 compute-0 sudo[231219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:51:04 compute-0 sudo[231219]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:04 compute-0 sudo[231244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 11:51:04 compute-0 sudo[231244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:51:04 compute-0 podman[231302]: 2025-11-26 11:51:04.574262678 +0000 UTC m=+0.030528104 container create d4266fe468d97c151680e7fb17de6969073e6e81d63ead08b63b0cfc403e42f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ganguly, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 11:51:04 compute-0 systemd[1]: Started libpod-conmon-d4266fe468d97c151680e7fb17de6969073e6e81d63ead08b63b0cfc403e42f6.scope.
Nov 26 11:51:04 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:51:04 compute-0 podman[231302]: 2025-11-26 11:51:04.637928896 +0000 UTC m=+0.094194352 container init d4266fe468d97c151680e7fb17de6969073e6e81d63ead08b63b0cfc403e42f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:51:04 compute-0 podman[231302]: 2025-11-26 11:51:04.642994559 +0000 UTC m=+0.099259995 container start d4266fe468d97c151680e7fb17de6969073e6e81d63ead08b63b0cfc403e42f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ganguly, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:51:04 compute-0 podman[231302]: 2025-11-26 11:51:04.645739455 +0000 UTC m=+0.102004911 container attach d4266fe468d97c151680e7fb17de6969073e6e81d63ead08b63b0cfc403e42f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ganguly, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 11:51:04 compute-0 confident_ganguly[231316]: 167 167
Nov 26 11:51:04 compute-0 systemd[1]: libpod-d4266fe468d97c151680e7fb17de6969073e6e81d63ead08b63b0cfc403e42f6.scope: Deactivated successfully.
Nov 26 11:51:04 compute-0 podman[231302]: 2025-11-26 11:51:04.646508525 +0000 UTC m=+0.102773981 container died d4266fe468d97c151680e7fb17de6969073e6e81d63ead08b63b0cfc403e42f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ganguly, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 11:51:04 compute-0 ceph-mon[74928]: pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:04 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:51:04 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:51:04 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:51:04 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:51:04 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:51:04 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:51:04 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:51:04 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:51:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a67fbfb4be0398dcd223f7f519e112998eba797219c28d8ca1e40a172af9ac9-merged.mount: Deactivated successfully.
Nov 26 11:51:04 compute-0 podman[231302]: 2025-11-26 11:51:04.562907416 +0000 UTC m=+0.019172872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:51:04 compute-0 podman[231302]: 2025-11-26 11:51:04.666073758 +0000 UTC m=+0.122339193 container remove d4266fe468d97c151680e7fb17de6969073e6e81d63ead08b63b0cfc403e42f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 11:51:04 compute-0 systemd[1]: libpod-conmon-d4266fe468d97c151680e7fb17de6969073e6e81d63ead08b63b0cfc403e42f6.scope: Deactivated successfully.
Nov 26 11:51:04 compute-0 podman[231339]: 2025-11-26 11:51:04.782570824 +0000 UTC m=+0.028202089 container create cab8fb84d5c33db771413049499773683ffb88dc9dcb02df225da176c7b30374 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jennings, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 11:51:04 compute-0 systemd[1]: Started libpod-conmon-cab8fb84d5c33db771413049499773683ffb88dc9dcb02df225da176c7b30374.scope.
Nov 26 11:51:04 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59b8fbcc5ea46067eea81dc6d657b2d4da3dba9a7331fd9972cf520bf0e4952e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59b8fbcc5ea46067eea81dc6d657b2d4da3dba9a7331fd9972cf520bf0e4952e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59b8fbcc5ea46067eea81dc6d657b2d4da3dba9a7331fd9972cf520bf0e4952e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59b8fbcc5ea46067eea81dc6d657b2d4da3dba9a7331fd9972cf520bf0e4952e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59b8fbcc5ea46067eea81dc6d657b2d4da3dba9a7331fd9972cf520bf0e4952e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:51:04 compute-0 podman[231339]: 2025-11-26 11:51:04.845178077 +0000 UTC m=+0.090809331 container init cab8fb84d5c33db771413049499773683ffb88dc9dcb02df225da176c7b30374 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jennings, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 11:51:04 compute-0 podman[231339]: 2025-11-26 11:51:04.849684985 +0000 UTC m=+0.095316240 container start cab8fb84d5c33db771413049499773683ffb88dc9dcb02df225da176c7b30374 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jennings, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 11:51:04 compute-0 podman[231339]: 2025-11-26 11:51:04.850813893 +0000 UTC m=+0.096445148 container attach cab8fb84d5c33db771413049499773683ffb88dc9dcb02df225da176c7b30374 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jennings, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:51:04 compute-0 podman[231339]: 2025-11-26 11:51:04.771316473 +0000 UTC m=+0.016947728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:51:05 compute-0 systemd[1]: Reloading.
Nov 26 11:51:05 compute-0 podman[231359]: 2025-11-26 11:51:05.248071593 +0000 UTC m=+0.054125039 container health_status 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 26 11:51:05 compute-0 systemd-sysv-generator[231397]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:51:05 compute-0 systemd-rc-local-generator[231394]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:51:05 compute-0 systemd[1]: Reloading.
Nov 26 11:51:05 compute-0 systemd-sysv-generator[231446]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:51:05 compute-0 systemd-rc-local-generator[231442]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:51:05 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:05 compute-0 determined_jennings[231352]: --> passed data devices: 0 physical, 3 LVM
Nov 26 11:51:05 compute-0 determined_jennings[231352]: --> relative data size: 1.0
Nov 26 11:51:05 compute-0 determined_jennings[231352]: --> All data devices are unavailable
Nov 26 11:51:05 compute-0 podman[231339]: 2025-11-26 11:51:05.667985554 +0000 UTC m=+0.913616819 container died cab8fb84d5c33db771413049499773683ffb88dc9dcb02df225da176c7b30374 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 11:51:05 compute-0 systemd[1]: libpod-cab8fb84d5c33db771413049499773683ffb88dc9dcb02df225da176c7b30374.scope: Deactivated successfully.
Nov 26 11:51:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-59b8fbcc5ea46067eea81dc6d657b2d4da3dba9a7331fd9972cf520bf0e4952e-merged.mount: Deactivated successfully.
Nov 26 11:51:05 compute-0 podman[231339]: 2025-11-26 11:51:05.752969069 +0000 UTC m=+0.998600324 container remove cab8fb84d5c33db771413049499773683ffb88dc9dcb02df225da176c7b30374 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jennings, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:51:05 compute-0 systemd[1]: libpod-conmon-cab8fb84d5c33db771413049499773683ffb88dc9dcb02df225da176c7b30374.scope: Deactivated successfully.
Nov 26 11:51:05 compute-0 sudo[231244]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:05 compute-0 sudo[231481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:51:05 compute-0 sudo[231481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:51:05 compute-0 sudo[231481]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:05 compute-0 systemd-logind[744]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 26 11:51:05 compute-0 sudo[231506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:51:05 compute-0 sudo[231506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:51:05 compute-0 sudo[231506]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:05 compute-0 systemd-logind[744]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 26 11:51:05 compute-0 sudo[231558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:51:05 compute-0 sudo[231558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:51:05 compute-0 sudo[231558]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:05 compute-0 lvm[231590]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 26 11:51:05 compute-0 lvm[231590]: VG ceph_vg1 finished
Nov 26 11:51:05 compute-0 lvm[231589]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 26 11:51:05 compute-0 lvm[231589]: VG ceph_vg0 finished
Nov 26 11:51:06 compute-0 lvm[231591]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 26 11:51:06 compute-0 lvm[231591]: VG ceph_vg2 finished
Nov 26 11:51:06 compute-0 sudo[231592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- lvm list --format json
Nov 26 11:51:06 compute-0 sudo[231592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:51:06 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 11:51:06 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 26 11:51:06 compute-0 systemd[1]: Reloading.
Nov 26 11:51:06 compute-0 systemd-sysv-generator[231678]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:51:06 compute-0 systemd-rc-local-generator[231675]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:51:06 compute-0 podman[231738]: 2025-11-26 11:51:06.327674596 +0000 UTC m=+0.031687540 container create fe7de42fd21a4d2d157c33f2de8d57083786f7f3acdedc6f39758bcd2448c779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 11:51:06 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 11:51:06 compute-0 systemd[1]: Started libpod-conmon-fe7de42fd21a4d2d157c33f2de8d57083786f7f3acdedc6f39758bcd2448c779.scope.
Nov 26 11:51:06 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:51:06 compute-0 podman[231738]: 2025-11-26 11:51:06.314219083 +0000 UTC m=+0.018232038 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:51:06 compute-0 podman[231738]: 2025-11-26 11:51:06.418860415 +0000 UTC m=+0.122873371 container init fe7de42fd21a4d2d157c33f2de8d57083786f7f3acdedc6f39758bcd2448c779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:51:06 compute-0 podman[231738]: 2025-11-26 11:51:06.424307265 +0000 UTC m=+0.128320211 container start fe7de42fd21a4d2d157c33f2de8d57083786f7f3acdedc6f39758bcd2448c779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shockley, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:51:06 compute-0 podman[231738]: 2025-11-26 11:51:06.427274251 +0000 UTC m=+0.131287216 container attach fe7de42fd21a4d2d157c33f2de8d57083786f7f3acdedc6f39758bcd2448c779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shockley, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 11:51:06 compute-0 upbeat_shockley[231874]: 167 167
Nov 26 11:51:06 compute-0 systemd[1]: libpod-fe7de42fd21a4d2d157c33f2de8d57083786f7f3acdedc6f39758bcd2448c779.scope: Deactivated successfully.
Nov 26 11:51:06 compute-0 podman[231738]: 2025-11-26 11:51:06.428011751 +0000 UTC m=+0.132024695 container died fe7de42fd21a4d2d157c33f2de8d57083786f7f3acdedc6f39758bcd2448c779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shockley, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 26 11:51:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-19b57b8c1f11ff09c47436140f04e1a59bff1acb09d1c0aaf10b3330c96c3c9c-merged.mount: Deactivated successfully.
Nov 26 11:51:06 compute-0 podman[231738]: 2025-11-26 11:51:06.446794077 +0000 UTC m=+0.150807022 container remove fe7de42fd21a4d2d157c33f2de8d57083786f7f3acdedc6f39758bcd2448c779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shockley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 11:51:06 compute-0 systemd[1]: libpod-conmon-fe7de42fd21a4d2d157c33f2de8d57083786f7f3acdedc6f39758bcd2448c779.scope: Deactivated successfully.
Nov 26 11:51:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:51:06 compute-0 podman[232074]: 2025-11-26 11:51:06.578070732 +0000 UTC m=+0.032981992 container create fbcde9e61142a167dffb224cccddc6441114b82b65e3c6a4864dc7557fd45d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 11:51:06 compute-0 systemd[1]: Started libpod-conmon-fbcde9e61142a167dffb224cccddc6441114b82b65e3c6a4864dc7557fd45d79.scope.
Nov 26 11:51:06 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/674011513fb36a34db0e207f489411340de6e8e6e08e96ba0bdc1957ad1fb051/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/674011513fb36a34db0e207f489411340de6e8e6e08e96ba0bdc1957ad1fb051/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/674011513fb36a34db0e207f489411340de6e8e6e08e96ba0bdc1957ad1fb051/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/674011513fb36a34db0e207f489411340de6e8e6e08e96ba0bdc1957ad1fb051/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:51:06 compute-0 podman[232074]: 2025-11-26 11:51:06.63595501 +0000 UTC m=+0.090866270 container init fbcde9e61142a167dffb224cccddc6441114b82b65e3c6a4864dc7557fd45d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 11:51:06 compute-0 podman[232074]: 2025-11-26 11:51:06.640618352 +0000 UTC m=+0.095529602 container start fbcde9e61142a167dffb224cccddc6441114b82b65e3c6a4864dc7557fd45d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 11:51:06 compute-0 podman[232074]: 2025-11-26 11:51:06.641609101 +0000 UTC m=+0.096520351 container attach fbcde9e61142a167dffb224cccddc6441114b82b65e3c6a4864dc7557fd45d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_pascal, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:51:06 compute-0 ceph-mon[74928]: pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:06 compute-0 podman[232074]: 2025-11-26 11:51:06.566157058 +0000 UTC m=+0.021068318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:51:06 compute-0 sudo[230925]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:07 compute-0 sudo[232969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccazakspntdsxikvbwsebzkzfaxnhxyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157866.8400502-670-62862959777081/AnsiballZ_systemd_service.py'
Nov 26 11:51:07 compute-0 sudo[232969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:07 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 11:51:07 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 26 11:51:07 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.059s CPU time.
Nov 26 11:51:07 compute-0 systemd[1]: run-r1c7d7fb113db49d8b9a5b781086e40bf.service: Deactivated successfully.
Nov 26 11:51:07 compute-0 quirky_pascal[232179]: {
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:     "0": [
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:         {
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "devices": [
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "/dev/loop3"
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             ],
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "lv_name": "ceph_lv0",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "lv_size": "21470642176",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a9ad59a0-aa2e-4d92-b571-519d2d145b6a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "lv_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "name": "ceph_lv0",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "tags": {
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.block_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.cluster_name": "ceph",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.crush_device_class": "",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.encrypted": "0",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.osd_fsid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.osd_id": "0",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.type": "block",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.vdo": "0"
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             },
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "type": "block",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "vg_name": "ceph_vg0"
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:         }
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:     ],
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:     "1": [
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:         {
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "devices": [
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "/dev/loop4"
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             ],
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "lv_name": "ceph_lv1",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "lv_size": "21470642176",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2627095b-eef8-4027-bfef-68bf7cb6801f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "lv_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "name": "ceph_lv1",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "tags": {
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.block_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.cluster_name": "ceph",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.crush_device_class": "",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.encrypted": "0",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.osd_fsid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.osd_id": "1",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.type": "block",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.vdo": "0"
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             },
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "type": "block",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "vg_name": "ceph_vg1"
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:         }
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:     ],
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:     "2": [
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:         {
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "devices": [
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "/dev/loop5"
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             ],
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "lv_name": "ceph_lv2",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "lv_size": "21470642176",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d56156fb-7361-4bef-b06b-1320109b4323,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "lv_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "name": "ceph_lv2",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "tags": {
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.block_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.cluster_name": "ceph",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.crush_device_class": "",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.encrypted": "0",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.osd_fsid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.osd_id": "2",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.type": "block",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:                 "ceph.vdo": "0"
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             },
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "type": "block",
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:             "vg_name": "ceph_vg2"
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:         }
Nov 26 11:51:07 compute-0 quirky_pascal[232179]:     ]
Nov 26 11:51:07 compute-0 quirky_pascal[232179]: }
Nov 26 11:51:07 compute-0 systemd[1]: libpod-fbcde9e61142a167dffb224cccddc6441114b82b65e3c6a4864dc7557fd45d79.scope: Deactivated successfully.
Nov 26 11:51:07 compute-0 conmon[232179]: conmon fbcde9e61142a167dffb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fbcde9e61142a167dffb224cccddc6441114b82b65e3c6a4864dc7557fd45d79.scope/container/memory.events
Nov 26 11:51:07 compute-0 podman[232074]: 2025-11-26 11:51:07.274862284 +0000 UTC m=+0.729773544 container died fbcde9e61142a167dffb224cccddc6441114b82b65e3c6a4864dc7557fd45d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_pascal, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 11:51:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-674011513fb36a34db0e207f489411340de6e8e6e08e96ba0bdc1957ad1fb051-merged.mount: Deactivated successfully.
Nov 26 11:51:07 compute-0 python3.9[232985]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 11:51:07 compute-0 podman[232074]: 2025-11-26 11:51:07.312617626 +0000 UTC m=+0.767528876 container remove fbcde9e61142a167dffb224cccddc6441114b82b65e3c6a4864dc7557fd45d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_pascal, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 11:51:07 compute-0 systemd[1]: libpod-conmon-fbcde9e61142a167dffb224cccddc6441114b82b65e3c6a4864dc7557fd45d79.scope: Deactivated successfully.
Nov 26 11:51:07 compute-0 systemd[1]: Stopping Open-iSCSI...
Nov 26 11:51:07 compute-0 iscsid[220654]: iscsid shutting down.
Nov 26 11:51:07 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Nov 26 11:51:07 compute-0 systemd[1]: Stopped Open-iSCSI.
Nov 26 11:51:07 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 26 11:51:07 compute-0 sudo[231592]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:07 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 26 11:51:07 compute-0 systemd[1]: Started Open-iSCSI.
Nov 26 11:51:07 compute-0 sudo[232969]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:07 compute-0 sudo[233080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:51:07 compute-0 sudo[233080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:51:07 compute-0 sudo[233080]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:07 compute-0 sudo[233109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:51:07 compute-0 sudo[233109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:51:07 compute-0 sudo[233109]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:07 compute-0 sudo[233154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:51:07 compute-0 sudo[233154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:51:07 compute-0 sudo[233154]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:07 compute-0 sudo[233184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- raw list --format json
Nov 26 11:51:07 compute-0 sudo[233184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:51:07 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:07 compute-0 podman[233362]: 2025-11-26 11:51:07.758541612 +0000 UTC m=+0.028925554 container create fffd0d20a9f04316e7285345cafe286043bedb959b40c59430513f4847dbb29e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_carver, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:51:07 compute-0 systemd[1]: Started libpod-conmon-fffd0d20a9f04316e7285345cafe286043bedb959b40c59430513f4847dbb29e.scope.
Nov 26 11:51:07 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:51:07 compute-0 podman[233362]: 2025-11-26 11:51:07.803944994 +0000 UTC m=+0.074328937 container init fffd0d20a9f04316e7285345cafe286043bedb959b40c59430513f4847dbb29e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_carver, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 11:51:07 compute-0 podman[233362]: 2025-11-26 11:51:07.810050186 +0000 UTC m=+0.080434128 container start fffd0d20a9f04316e7285345cafe286043bedb959b40c59430513f4847dbb29e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_carver, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 11:51:07 compute-0 nostalgic_carver[233376]: 167 167
Nov 26 11:51:07 compute-0 systemd[1]: libpod-fffd0d20a9f04316e7285345cafe286043bedb959b40c59430513f4847dbb29e.scope: Deactivated successfully.
Nov 26 11:51:07 compute-0 conmon[233376]: conmon fffd0d20a9f04316e728 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fffd0d20a9f04316e7285345cafe286043bedb959b40c59430513f4847dbb29e.scope/container/memory.events
Nov 26 11:51:07 compute-0 podman[233362]: 2025-11-26 11:51:07.814426939 +0000 UTC m=+0.084810891 container attach fffd0d20a9f04316e7285345cafe286043bedb959b40c59430513f4847dbb29e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:51:07 compute-0 podman[233362]: 2025-11-26 11:51:07.814624161 +0000 UTC m=+0.085008093 container died fffd0d20a9f04316e7285345cafe286043bedb959b40c59430513f4847dbb29e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:51:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7dc17dbbf04ead723180aced83dd2ac21bc39b3bbca6655ab4b6ce639cb031a-merged.mount: Deactivated successfully.
Nov 26 11:51:07 compute-0 podman[233362]: 2025-11-26 11:51:07.834287879 +0000 UTC m=+0.104671821 container remove fffd0d20a9f04316e7285345cafe286043bedb959b40c59430513f4847dbb29e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_carver, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:51:07 compute-0 podman[233362]: 2025-11-26 11:51:07.746625823 +0000 UTC m=+0.017009785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:51:07 compute-0 systemd[1]: libpod-conmon-fffd0d20a9f04316e7285345cafe286043bedb959b40c59430513f4847dbb29e.scope: Deactivated successfully.
Nov 26 11:51:07 compute-0 python3.9[233357]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 11:51:07 compute-0 podman[233398]: 2025-11-26 11:51:07.956475686 +0000 UTC m=+0.030148889 container create 0e92dc1883cc8d6efd77ef57ef65447930754cb220de08feb8d00c7acceed950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bartik, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 11:51:07 compute-0 systemd[1]: Started libpod-conmon-0e92dc1883cc8d6efd77ef57ef65447930754cb220de08feb8d00c7acceed950.scope.
Nov 26 11:51:08 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfc5b4ab4cb0bcb2642e445094b9095bcad3f38ab7db1556e43a929eca347819/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfc5b4ab4cb0bcb2642e445094b9095bcad3f38ab7db1556e43a929eca347819/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfc5b4ab4cb0bcb2642e445094b9095bcad3f38ab7db1556e43a929eca347819/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfc5b4ab4cb0bcb2642e445094b9095bcad3f38ab7db1556e43a929eca347819/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:51:08 compute-0 podman[233398]: 2025-11-26 11:51:08.017446012 +0000 UTC m=+0.091119216 container init 0e92dc1883cc8d6efd77ef57ef65447930754cb220de08feb8d00c7acceed950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 11:51:08 compute-0 podman[233398]: 2025-11-26 11:51:08.023400048 +0000 UTC m=+0.097073252 container start 0e92dc1883cc8d6efd77ef57ef65447930754cb220de08feb8d00c7acceed950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bartik, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 11:51:08 compute-0 podman[233398]: 2025-11-26 11:51:08.024775352 +0000 UTC m=+0.098448556 container attach 0e92dc1883cc8d6efd77ef57ef65447930754cb220de08feb8d00c7acceed950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bartik, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 11:51:08 compute-0 podman[233398]: 2025-11-26 11:51:07.945136966 +0000 UTC m=+0.018810189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:51:08 compute-0 sudo[233569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbfxagonperjapyjouccaytyybdztttt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157868.3335695-688-120779951570877/AnsiballZ_file.py'
Nov 26 11:51:08 compute-0 sudo[233569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:08 compute-0 ceph-mon[74928]: pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:08 compute-0 python3.9[233571]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:51:08 compute-0 sudo[233569]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:08 compute-0 nifty_bartik[233415]: {
Nov 26 11:51:08 compute-0 nifty_bartik[233415]:     "2627095b-eef8-4027-bfef-68bf7cb6801f": {
Nov 26 11:51:08 compute-0 nifty_bartik[233415]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:51:08 compute-0 nifty_bartik[233415]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 11:51:08 compute-0 nifty_bartik[233415]:         "osd_id": 1,
Nov 26 11:51:08 compute-0 nifty_bartik[233415]:         "osd_uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:51:08 compute-0 nifty_bartik[233415]:         "type": "bluestore"
Nov 26 11:51:08 compute-0 nifty_bartik[233415]:     },
Nov 26 11:51:08 compute-0 nifty_bartik[233415]:     "a9ad59a0-aa2e-4d92-b571-519d2d145b6a": {
Nov 26 11:51:08 compute-0 nifty_bartik[233415]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:51:08 compute-0 nifty_bartik[233415]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 11:51:08 compute-0 nifty_bartik[233415]:         "osd_id": 0,
Nov 26 11:51:08 compute-0 nifty_bartik[233415]:         "osd_uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:51:08 compute-0 nifty_bartik[233415]:         "type": "bluestore"
Nov 26 11:51:08 compute-0 nifty_bartik[233415]:     },
Nov 26 11:51:08 compute-0 nifty_bartik[233415]:     "d56156fb-7361-4bef-b06b-1320109b4323": {
Nov 26 11:51:08 compute-0 nifty_bartik[233415]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:51:08 compute-0 nifty_bartik[233415]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 11:51:08 compute-0 nifty_bartik[233415]:         "osd_id": 2,
Nov 26 11:51:08 compute-0 nifty_bartik[233415]:         "osd_uuid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:51:08 compute-0 nifty_bartik[233415]:         "type": "bluestore"
Nov 26 11:51:08 compute-0 nifty_bartik[233415]:     }
Nov 26 11:51:08 compute-0 nifty_bartik[233415]: }
Nov 26 11:51:08 compute-0 systemd[1]: libpod-0e92dc1883cc8d6efd77ef57ef65447930754cb220de08feb8d00c7acceed950.scope: Deactivated successfully.
Nov 26 11:51:08 compute-0 podman[233398]: 2025-11-26 11:51:08.782570142 +0000 UTC m=+0.856243355 container died 0e92dc1883cc8d6efd77ef57ef65447930754cb220de08feb8d00c7acceed950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bartik, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 11:51:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfc5b4ab4cb0bcb2642e445094b9095bcad3f38ab7db1556e43a929eca347819-merged.mount: Deactivated successfully.
Nov 26 11:51:08 compute-0 podman[233398]: 2025-11-26 11:51:08.814479541 +0000 UTC m=+0.888152744 container remove 0e92dc1883cc8d6efd77ef57ef65447930754cb220de08feb8d00c7acceed950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bartik, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:51:08 compute-0 systemd[1]: libpod-conmon-0e92dc1883cc8d6efd77ef57ef65447930754cb220de08feb8d00c7acceed950.scope: Deactivated successfully.
Nov 26 11:51:08 compute-0 sudo[233184]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:08 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:51:08 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:51:08 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:51:08 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:51:08 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 2508b250-0d7e-452c-9478-ae935b7a545f does not exist
Nov 26 11:51:08 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev d84843ed-b441-4c27-bf4c-e712420f33d6 does not exist
Nov 26 11:51:08 compute-0 sudo[233633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:51:08 compute-0 sudo[233633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:51:08 compute-0 sudo[233633]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:08 compute-0 sudo[233658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:51:08 compute-0 sudo[233658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:51:08 compute-0 sudo[233658]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:09 compute-0 sudo[233808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhowlxrlzhzmjucbaqvbasxqaywnkuyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157869.0463529-699-15597553536503/AnsiballZ_systemd_service.py'
Nov 26 11:51:09 compute-0 sudo[233808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:09 compute-0 python3.9[233810]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 11:51:09 compute-0 systemd[1]: Reloading.
Nov 26 11:51:09 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:09 compute-0 systemd-rc-local-generator[233829]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:51:09 compute-0 systemd-sysv-generator[233832]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:51:09 compute-0 sudo[233808]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:09 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:51:09 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:51:10 compute-0 python3.9[233995]: ansible-ansible.builtin.service_facts Invoked
Nov 26 11:51:10 compute-0 network[234012]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 11:51:10 compute-0 network[234013]: 'network-scripts' will be removed from distribution in near future.
Nov 26 11:51:10 compute-0 network[234014]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 11:51:10 compute-0 ceph-mon[74928]: pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:51:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:51:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:51:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:51:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:51:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:51:11 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:51:12 compute-0 sudo[234287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urirxmkfzdwxevqgifdupnbdatpswpdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157872.550272-718-87516660373935/AnsiballZ_systemd_service.py'
Nov 26 11:51:12 compute-0 sudo[234287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:12 compute-0 ceph-mon[74928]: pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:12 compute-0 python3.9[234289]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:51:13 compute-0 sudo[234287]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:13 compute-0 sudo[234440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdiroqpnkatfxyihoqyjgbhvhqbowecu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157873.115254-718-194966752061597/AnsiballZ_systemd_service.py'
Nov 26 11:51:13 compute-0 sudo[234440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:13 compute-0 python3.9[234442]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:51:13 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:13 compute-0 sudo[234440]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:13 compute-0 sudo[234593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcrcoqycyuctkendzitjtptpmfzrdhet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157873.7888706-718-8458518335169/AnsiballZ_systemd_service.py'
Nov 26 11:51:13 compute-0 sudo[234593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:14 compute-0 python3.9[234595]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:51:14 compute-0 sudo[234593]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:14 compute-0 sudo[234746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwpmirvvrrdvtulyudmqkebzcdsptmku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157874.3351698-718-76559033604021/AnsiballZ_systemd_service.py'
Nov 26 11:51:14 compute-0 sudo[234746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:14 compute-0 python3.9[234748]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:51:14 compute-0 sudo[234746]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:14 compute-0 ceph-mon[74928]: pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:15 compute-0 sudo[234899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flqxcmcmkvppvendpnovisamaxizzscb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157874.881916-718-214647153329926/AnsiballZ_systemd_service.py'
Nov 26 11:51:15 compute-0 sudo[234899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:15 compute-0 python3.9[234901]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:51:15 compute-0 sudo[234899]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:15 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:15 compute-0 sudo[235052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbeqhgkbdawukdxqnrdysyqzsstmftcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157875.4466734-718-186075003914948/AnsiballZ_systemd_service.py'
Nov 26 11:51:15 compute-0 sudo[235052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:15 compute-0 python3.9[235054]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:51:15 compute-0 sudo[235052]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:15 compute-0 podman[235056]: 2025-11-26 11:51:15.992453845 +0000 UTC m=+0.058175928 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 26 11:51:16 compute-0 sudo[235229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwoekwtzzjxrvhhjwfmsiamdcvqgpewu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157876.0377116-718-256358052371977/AnsiballZ_systemd_service.py'
Nov 26 11:51:16 compute-0 sudo[235229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:16 compute-0 python3.9[235231]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:51:16 compute-0 sudo[235229]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:51:16 compute-0 sudo[235382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbtmcotxvspmsqvjoopxccyqhippljao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157876.5939584-718-20371852532879/AnsiballZ_systemd_service.py'
Nov 26 11:51:16 compute-0 sudo[235382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:16 compute-0 ceph-mon[74928]: pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:17 compute-0 python3.9[235384]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:51:17 compute-0 sudo[235382]: pam_unix(sudo:session): session closed for user root
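[annotation] The sequence above shows ansible.builtin.systemd_service stopping and disabling each tripleo_nova_* unit in its own task. A minimal sketch of an equivalent loop-driven play follows; the play header, task names and the use of a loop are assumptions, while the module arguments (state: stopped, enabled: false) and unit names are taken from the logged invocations.

    # Sketch only: reconstructs the per-unit systemd_service calls logged above.
    - name: Stop and disable the legacy tripleo_nova_* units (sketch)
      hosts: compute-0
      become: true
      tasks:
        - name: Stop and disable each unit
          ansible.builtin.systemd_service:
            name: "{{ item }}"
            state: stopped
            enabled: false
          loop:
            - tripleo_nova_compute.service
            - tripleo_nova_migration_target.service
            - tripleo_nova_api_cron.service
            - tripleo_nova_api.service
            - tripleo_nova_conductor.service
            - tripleo_nova_metadata.service
            - tripleo_nova_scheduler.service
            - tripleo_nova_vnc_proxy.service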
Nov 26 11:51:17 compute-0 sudo[235535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzfaooesthoumwnsibgdeqidzkjggoil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157877.3063202-777-119004030914419/AnsiballZ_file.py'
Nov 26 11:51:17 compute-0 sudo[235535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:17 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:17 compute-0 python3.9[235537]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:51:17 compute-0 sudo[235535]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:17 compute-0 sudo[235687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvvlrzxjwlprdubbwhkdmmslbgyqwgbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157877.7604446-777-221194541990003/AnsiballZ_file.py'
Nov 26 11:51:17 compute-0 sudo[235687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:18 compute-0 python3.9[235689]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:51:18 compute-0 sudo[235687]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:18 compute-0 sudo[235839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baqzshfvfwpiqrpbojkzdosgthjmxyvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157878.2031436-777-194734089371998/AnsiballZ_file.py'
Nov 26 11:51:18 compute-0 sudo[235839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:18 compute-0 python3.9[235841]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:51:18 compute-0 sudo[235839]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:18 compute-0 ceph-mon[74928]: pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:18 compute-0 sudo[235991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htrluqldpmjfjancrvatuwffcaeuxlrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157878.761735-777-148085167154336/AnsiballZ_file.py'
Nov 26 11:51:18 compute-0 sudo[235991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:19 compute-0 python3.9[235993]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:51:19 compute-0 sudo[235991]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:19 compute-0 sudo[236143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llrytwkeqtcattoczuesldogsbkeezhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157879.1960948-777-76954685417437/AnsiballZ_file.py'
Nov 26 11:51:19 compute-0 sudo[236143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:19 compute-0 python3.9[236145]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:51:19 compute-0 sudo[236143]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:19 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:19 compute-0 sudo[236295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rognkybleosmtwrecsxniqqdsmlcyqjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157879.6253262-777-84394553626615/AnsiballZ_file.py'
Nov 26 11:51:19 compute-0 sudo[236295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:19 compute-0 python3.9[236297]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:51:19 compute-0 sudo[236295]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:20 compute-0 sudo[236447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffvevogkdtngorscflhhqqpwcbbyglqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157880.0632427-777-236618928554773/AnsiballZ_file.py'
Nov 26 11:51:20 compute-0 sudo[236447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:20 compute-0 python3.9[236449]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:51:20 compute-0 sudo[236447]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:20 compute-0 sudo[236599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uabtkhdzemcaljjxdwyhzhvasoheekrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157880.489614-777-211227643957840/AnsiballZ_file.py'
Nov 26 11:51:20 compute-0 sudo[236599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:20 compute-0 python3.9[236601]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:51:20 compute-0 sudo[236599]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:20 compute-0 ceph-mon[74928]: pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:21 compute-0 sudo[236751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfehaykuyjthpgacoewwfiqjeovzzkzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157880.9533293-834-123154308362696/AnsiballZ_file.py'
Nov 26 11:51:21 compute-0 sudo[236751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:21 compute-0 python3.9[236753]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:51:21 compute-0 sudo[236751]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:21 compute-0 sudo[236903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwhxgylqqzzjzjcqinjumjmhxozbbqmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157881.3894105-834-220817801309965/AnsiballZ_file.py'
Nov 26 11:51:21 compute-0 sudo[236903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:21 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:51:21 compute-0 python3.9[236905]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:51:21 compute-0 sudo[236903]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:21 compute-0 sudo[237055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itdofgtftsfppximmgxljhseftlrwfxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157881.8119235-834-1676679245780/AnsiballZ_file.py'
Nov 26 11:51:21 compute-0 sudo[237055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:22 compute-0 python3.9[237057]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:51:22 compute-0 sudo[237055]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:22 compute-0 sudo[237207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouccvgiuaahtveisufuhujtxucxwxugk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157882.2528284-834-159596705672034/AnsiballZ_file.py'
Nov 26 11:51:22 compute-0 sudo[237207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:22 compute-0 python3.9[237209]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:51:22 compute-0 sudo[237207]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:22 compute-0 sudo[237359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttyupmjnimxqqwolqteivnlfrnpvdxib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157882.68357-834-267725941714941/AnsiballZ_file.py'
Nov 26 11:51:22 compute-0 sudo[237359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:22 compute-0 ceph-mon[74928]: pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:23 compute-0 python3.9[237361]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:51:23 compute-0 sudo[237359]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:23 compute-0 sudo[237511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhayamgbisedtotkajjinkcsttvtmmye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157883.109782-834-11242476605768/AnsiballZ_file.py'
Nov 26 11:51:23 compute-0 sudo[237511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:23 compute-0 python3.9[237513]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:51:23 compute-0 sudo[237511]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:23 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:23 compute-0 sudo[237663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwymjiuizadwzazzrajpemdfjpntayej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157883.615279-834-148558769998243/AnsiballZ_file.py'
Nov 26 11:51:23 compute-0 sudo[237663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:23 compute-0 ceph-mon[74928]: pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:23 compute-0 python3.9[237665]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:51:23 compute-0 sudo[237663]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:24 compute-0 sudo[237815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytemsyjydmbxnvdgugcbobbpkfzyefxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157884.0456605-834-109480361736233/AnsiballZ_file.py'
Nov 26 11:51:24 compute-0 sudo[237815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:24 compute-0 python3.9[237817]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:51:24 compute-0 sudo[237815]: pam_unix(sudo:session): session closed for user root
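[annotation] The two passes above remove the same tripleo_nova_* unit files first from /usr/lib/systemd/system and then from /etc/systemd/system, one ansible.builtin.file task per path. A condensed sketch using a product loop is shown below; the play wrapper and loop construction are assumptions, the paths and state=absent come from the logged tasks.

    # Sketch only: one task covering both unit directories seen in the log.
    - name: Remove leftover tripleo_nova_* unit files (sketch)
      hosts: compute-0
      become: true
      vars:
        # Unit names taken from the logged file-removal tasks.
        tripleo_nova_units:
          - tripleo_nova_compute.service
          - tripleo_nova_migration_target.service
          - tripleo_nova_api_cron.service
          - tripleo_nova_api.service
          - tripleo_nova_conductor.service
          - tripleo_nova_metadata.service
          - tripleo_nova_scheduler.service
          - tripleo_nova_vnc_proxy.service
      tasks:
        - name: Delete the unit file from both systemd unit directories
          ansible.builtin.file:
            path: "{{ item[0] }}/{{ item[1] }}"
            state: absent
          loop: "{{ ['/usr/lib/systemd/system', '/etc/systemd/system'] | product(tripleo_nova_units) | list }}"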
Nov 26 11:51:24 compute-0 sudo[237967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlytclkdzhvglithqlplznlfmjkxgpfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157884.6284244-892-138029576112827/AnsiballZ_command.py'
Nov 26 11:51:24 compute-0 sudo[237967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:24 compute-0 python3.9[237969]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:51:24 compute-0 sudo[237967]: pam_unix(sudo:session): session closed for user root
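[annotation] The shell command logged just above disables certmonger only when it is active and masks it only when no local override unit exists. A sketch of the same step as an Ansible shell task follows; the shell body is copied from the logged command, while the play and task wrapper are assumptions.

    # Sketch only: shell body quoted from the command recorded in the log.
    - name: Disable certmonger if it is running (sketch)
      hosts: compute-0
      become: true
      tasks:
        - name: Stop, disable and conditionally mask certmonger
          ansible.builtin.shell:
            cmd: |
              if systemctl is-active certmonger.service; then
                systemctl disable --now certmonger.service
                test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
              fi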
Nov 26 11:51:25 compute-0 python3.9[238121]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 11:51:25 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:25 compute-0 sudo[238271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvtcmravjpcowvbvxdbgigbhisjhafmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157885.7551818-910-111306052188703/AnsiballZ_systemd_service.py'
Nov 26 11:51:25 compute-0 sudo[238271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:26 compute-0 python3.9[238273]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 11:51:26 compute-0 systemd[1]: Reloading.
Nov 26 11:51:26 compute-0 systemd-rc-local-generator[238294]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:51:26 compute-0 systemd-sysv-generator[238297]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:51:26 compute-0 sudo[238271]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:51:26 compute-0 ceph-mon[74928]: pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:26 compute-0 sudo[238458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzipqwtgwlxlqixiothzbbggijubgqtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157886.5828054-918-168983318012751/AnsiballZ_command.py'
Nov 26 11:51:26 compute-0 sudo[238458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:26 compute-0 python3.9[238460]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:51:26 compute-0 sudo[238458]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:27 compute-0 sudo[238611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txgmcobhnifcytibutoshwtgeqiucozs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157887.0286057-918-245046168569230/AnsiballZ_command.py'
Nov 26 11:51:27 compute-0 sudo[238611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:27 compute-0 python3.9[238613]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:51:27 compute-0 sudo[238611]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:27 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:27 compute-0 sudo[238764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkjzcqsuvlxtdebomkxmthrgyrydjjee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157887.4647598-918-3091401034623/AnsiballZ_command.py'
Nov 26 11:51:27 compute-0 sudo[238764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:27 compute-0 python3.9[238766]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:51:27 compute-0 sudo[238764]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:28 compute-0 sudo[238917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mplbodurfjtdvjpjxqaaerxlkqwoxjpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157887.9895854-918-139087009353365/AnsiballZ_command.py'
Nov 26 11:51:28 compute-0 sudo[238917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:28 compute-0 python3.9[238919]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:51:28 compute-0 sudo[238917]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:28 compute-0 sudo[239070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lamcidjdjwibxsniqrgqwxbxjwcyteps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157888.4239688-918-57842333384802/AnsiballZ_command.py'
Nov 26 11:51:28 compute-0 sudo[239070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:28 compute-0 ceph-mon[74928]: pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:28 compute-0 python3.9[239072]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:51:28 compute-0 sudo[239070]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:29 compute-0 sudo[239237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veoygaazpsjiahfrexkidddiokhyqwyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157888.861876-918-123759625814078/AnsiballZ_command.py'
Nov 26 11:51:29 compute-0 sudo[239237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:29 compute-0 podman[239197]: 2025-11-26 11:51:29.059121233 +0000 UTC m=+0.040199009 container health_status b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 11:51:29 compute-0 python3.9[239242]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:51:29 compute-0 sudo[239237]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:29 compute-0 sudo[239393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umlvepsnoskupfnntlxdmbdcqshotjvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157889.326811-918-79446231426929/AnsiballZ_command.py'
Nov 26 11:51:29 compute-0 sudo[239393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:29 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:29 compute-0 python3.9[239395]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:51:29 compute-0 sudo[239393]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:29 compute-0 sudo[239546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtbppfodizhqqkrvwayyxylhqukeyqlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157889.7538385-918-56636816390417/AnsiballZ_command.py'
Nov 26 11:51:29 compute-0 sudo[239546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:30 compute-0 python3.9[239548]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 11:51:30 compute-0 sudo[239546]: pam_unix(sudo:session): session closed for user root
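[annotation] The commands above clear any failed state left behind by the removed units via /usr/bin/systemctl reset-failed, one unit per task. A loop-based sketch is below; the error tolerance (failed_when: false) is an assumption for units systemd may no longer know about, and is not something the log shows.

    # Sketch only: mirrors the logged reset-failed commands.
    - name: Clear failed state for the removed units (sketch)
      hosts: compute-0
      become: true
      tasks:
        - name: Run systemctl reset-failed for each unit
          ansible.builtin.command:
            cmd: "/usr/bin/systemctl reset-failed {{ item }}"
          # Assumption: tolerate units systemd no longer knows about.
          failed_when: false
          loop:
            - tripleo_nova_compute.service
            - tripleo_nova_migration_target.service
            - tripleo_nova_api_cron.service
            - tripleo_nova_api.service
            - tripleo_nova_conductor.service
            - tripleo_nova_metadata.service
            - tripleo_nova_scheduler.service
            - tripleo_nova_vnc_proxy.service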
Nov 26 11:51:30 compute-0 ceph-mon[74928]: pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:31 compute-0 sudo[239699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvbznjzcelvhxrjosmyutjbeianhjoea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157890.8294132-997-154850551578368/AnsiballZ_file.py'
Nov 26 11:51:31 compute-0 sudo[239699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:31 compute-0 python3.9[239701]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:51:31 compute-0 sudo[239699]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:31 compute-0 sudo[239851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxrtklojdqvxkiusvbnuhvfygleenbcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157891.2735274-997-21784948137255/AnsiballZ_file.py'
Nov 26 11:51:31 compute-0 sudo[239851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:31 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:51:31 compute-0 python3.9[239853]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:51:31 compute-0 sudo[239851]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:31 compute-0 sudo[240003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhqjyddtbxyrrbxvjuvimhkzybztdhdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157891.713228-997-32802811556954/AnsiballZ_file.py'
Nov 26 11:51:31 compute-0 sudo[240003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:32 compute-0 python3.9[240005]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:51:32 compute-0 sudo[240003]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:32 compute-0 sudo[240155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfrdotgkrddofbdbtmntaamdglwardfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157892.1923578-1019-134816934516851/AnsiballZ_file.py'
Nov 26 11:51:32 compute-0 sudo[240155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:32 compute-0 python3.9[240157]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:51:32 compute-0 sudo[240155]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:32 compute-0 ceph-mon[74928]: pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:32 compute-0 sudo[240307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aifudpivonytbjwpbdpeuugzmcdhbblq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157892.6636877-1019-224483580305904/AnsiballZ_file.py'
Nov 26 11:51:32 compute-0 sudo[240307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:32 compute-0 python3.9[240309]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:51:33 compute-0 sudo[240307]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:33 compute-0 sudo[240459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkhqixyxcywwwognjwbyymyrnvfatuby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157893.104161-1019-25230145531977/AnsiballZ_file.py'
Nov 26 11:51:33 compute-0 sudo[240459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:33 compute-0 python3.9[240461]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:51:33 compute-0 sudo[240459]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:33 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:33 compute-0 sudo[240611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cldvnunwsndtjtvaweoobjsqbaybwhrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157893.561016-1019-202280219876629/AnsiballZ_file.py'
Nov 26 11:51:33 compute-0 sudo[240611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:33 compute-0 python3.9[240613]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:51:33 compute-0 sudo[240611]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:34 compute-0 sudo[240763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcggfzgfvtyzqnppqapocetdrzcpgyyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157894.0013072-1019-184060980914753/AnsiballZ_file.py'
Nov 26 11:51:34 compute-0 sudo[240763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:34 compute-0 python3.9[240765]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:51:34 compute-0 sudo[240763]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:34 compute-0 sudo[240915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jchmgcjtlrhtzuvmmjvqlpiiemmdvdfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157894.4493375-1019-201896353631325/AnsiballZ_file.py'
Nov 26 11:51:34 compute-0 sudo[240915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:34 compute-0 ceph-mon[74928]: pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:34 compute-0 python3.9[240917]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:51:34 compute-0 sudo[240915]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:35 compute-0 sudo[241067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmekxeoinsrtvtkeedhqzoknsmawbmst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157894.8822849-1019-128289445159445/AnsiballZ_file.py'
Nov 26 11:51:35 compute-0 sudo[241067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:35 compute-0 python3.9[241069]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:51:35 compute-0 sudo[241067]: pam_unix(sudo:session): session closed for user root
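[annotation] The ansible.builtin.file tasks above create the EDPM nova configuration and state directories with fixed ownership and the container_file_t SELinux type. A partial sketch follows; it lists only a subset of the logged paths (the rest follow the same pattern), and the play wrapper and loop structure are assumptions.

    # Sketch only: directory attributes taken from the logged file tasks.
    - name: Create the EDPM nova directories (sketch)
      hosts: compute-0
      become: true
      tasks:
        - name: Ensure directory exists with the logged ownership and SELinux type
          ansible.builtin.file:
            path: "{{ item.path }}"
            state: directory
            owner: "{{ item.owner }}"
            group: "{{ item.group }}"
            mode: "{{ item.mode | default(omit) }}"
            setype: container_file_t
          loop:
            # Subset of the paths recorded above.
            - { path: /var/lib/openstack/config/nova, owner: zuul, group: zuul, mode: '0755' }
            - { path: /var/lib/openstack/config/containers, owner: zuul, group: zuul, mode: '0755' }
            - { path: /var/lib/nova/instances, owner: zuul, group: zuul, mode: '0755' }
            - { path: /etc/ceph, owner: root, group: root, mode: '0750' }
            - { path: /etc/multipath, owner: zuul, group: zuul }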
Nov 26 11:51:35 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:35 compute-0 podman[241094]: 2025-11-26 11:51:35.611229604 +0000 UTC m=+0.036452396 container health_status 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 26 11:51:36 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 26 11:51:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:51:36 compute-0 ceph-mon[74928]: pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:37 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 26 11:51:37 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:38 compute-0 ceph-mon[74928]: pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:39 compute-0 sudo[241237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnczruljjekljxlozekgzlnuboqjbfip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157899.1346562-1208-1359101240935/AnsiballZ_getent.py'
Nov 26 11:51:39 compute-0 sudo[241237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:39 compute-0 python3.9[241239]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 26 11:51:39 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:39 compute-0 sudo[241237]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:39 compute-0 sudo[241390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffjqnurspzohljqiipebrhjeguvcbilp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157899.7129512-1216-50842824365711/AnsiballZ_group.py'
Nov 26 11:51:39 compute-0 sudo[241390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:40 compute-0 python3.9[241392]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 11:51:40 compute-0 groupadd[241393]: group added to /etc/group: name=nova, GID=42436
Nov 26 11:51:40 compute-0 groupadd[241393]: group added to /etc/gshadow: name=nova
Nov 26 11:51:40 compute-0 groupadd[241393]: new group: name=nova, GID=42436
Nov 26 11:51:40 compute-0 sudo[241390]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:40 compute-0 ceph-mon[74928]: pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:40 compute-0 sudo[241548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgmdtwkhunywutpanxgkwnajepyuycuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157900.3324437-1224-189145989737516/AnsiballZ_user.py'
Nov 26 11:51:40 compute-0 sudo[241548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:40 compute-0 python3.9[241550]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 26 11:51:40 compute-0 useradd[241552]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Nov 26 11:51:40 compute-0 rsyslogd[960]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 11:51:40 compute-0 useradd[241552]: add 'nova' to group 'libvirt'
Nov 26 11:51:40 compute-0 useradd[241552]: add 'nova' to shadow group 'libvirt'
Nov 26 11:51:40 compute-0 sudo[241548]: pam_unix(sudo:session): session closed for user root
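The three Ansible invocations above (getent, group, user) create the nova service account that later owns /var/lib/nova. Below is a rough Python sketch of the equivalent manual steps, using only the values recorded in this log (UID/GID 42436, supplementary group libvirt, shell /bin/sh); the real ansible.builtin modules additionally handle idempotence, /etc/gshadow, and home-directory skeletons.

import subprocess

def ensure_nova_account():
    # "getent passwd nova" exits non-zero while the user does not exist yet
    if subprocess.run(["getent", "passwd", "nova"]).returncode != 0:
        # group added to /etc/group: name=nova, GID=42436
        subprocess.run(["groupadd", "-g", "42436", "nova"], check=True)
        # new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh,
        # added to group 'libvirt'
        subprocess.run(
            ["useradd", "-u", "42436", "-g", "nova", "-G", "libvirt",
             "-s", "/bin/sh", "-c", "nova user", "-m", "nova"],
            check=True,
        )

if __name__ == "__main__":
    ensure_nova_account()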
Nov 26 11:51:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Optimize plan auto_2025-11-26_11:51:41
Nov 26 11:51:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 11:51:41 compute-0 ceph-mgr[75197]: [balancer INFO root] do_upmap
Nov 26 11:51:41 compute-0 ceph-mgr[75197]: [balancer INFO root] pools ['backups', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'images', '.rgw.root', '.mgr', 'vms']
Nov 26 11:51:41 compute-0 ceph-mgr[75197]: [balancer INFO root] prepared 0/10 changes
Nov 26 11:51:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:51:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:51:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:51:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:51:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:51:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:51:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 11:51:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:51:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 11:51:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:51:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:51:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:51:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:51:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:51:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:51:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:51:41 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:51:41 compute-0 sshd-session[241584]: Accepted publickey for zuul from 192.168.122.30 port 42938 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:51:41 compute-0 systemd-logind[744]: New session 50 of user zuul.
Nov 26 11:51:41 compute-0 systemd[1]: Started Session 50 of User zuul.
Nov 26 11:51:41 compute-0 sshd-session[241584]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:51:41 compute-0 sshd-session[241587]: Received disconnect from 192.168.122.30 port 42938:11: disconnected by user
Nov 26 11:51:41 compute-0 sshd-session[241587]: Disconnected from user zuul 192.168.122.30 port 42938
Nov 26 11:51:41 compute-0 sshd-session[241584]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:51:41 compute-0 systemd-logind[744]: Session 50 logged out. Waiting for processes to exit.
Nov 26 11:51:41 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Nov 26 11:51:41 compute-0 systemd-logind[744]: Removed session 50.
Nov 26 11:51:42 compute-0 python3.9[241737]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:51:42 compute-0 python3.9[241858]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764157901.8860073-1249-151445537586028/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:51:42 compute-0 ceph-mon[74928]: pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:43 compute-0 python3.9[242008]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:51:43 compute-0 python3.9[242084]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:51:43 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:43 compute-0 python3.9[242234]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:51:44 compute-0 python3.9[242355]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764157903.441012-1249-263885341297549/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:51:44 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 26 11:51:44 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 26 11:51:44 compute-0 python3.9[242507]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:51:44 compute-0 ceph-mon[74928]: pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:44 compute-0 python3.9[242628]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764157904.237312-1249-23251168200229/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:51:45 compute-0 python3.9[242778]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:51:45 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:45 compute-0 python3.9[242899]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764157905.0343292-1249-86269878547100/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:51:46 compute-0 podman[243023]: 2025-11-26 11:51:46.174270069 +0000 UTC m=+0.059717174 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:51:46 compute-0 python3.9[243059]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:51:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:51:46 compute-0 python3.9[243193]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764157905.9412181-1249-43831827830192/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:51:46 compute-0 ceph-mon[74928]: pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:47 compute-0 sudo[243343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clljxmlzaxzgrzvssuaitngdkilwphbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157906.8384006-1332-105686200283713/AnsiballZ_file.py'
Nov 26 11:51:47 compute-0 sudo[243343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:47 compute-0 python3.9[243345]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:51:47 compute-0 sudo[243343]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:47 compute-0 sudo[243495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmmqmugmaylgdzqthvxdbjkbuqjaalto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157907.3132827-1340-278597133698434/AnsiballZ_copy.py'
Nov 26 11:51:47 compute-0 sudo[243495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:47 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:47 compute-0 python3.9[243497]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:51:47 compute-0 sudo[243495]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:47 compute-0 sudo[243647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwastaxnovzxwfvgvbazhvjxomivqcwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157907.798022-1348-28061023767163/AnsiballZ_stat.py'
Nov 26 11:51:47 compute-0 sudo[243647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:48 compute-0 python3.9[243649]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:51:48 compute-0 sudo[243647]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:48 compute-0 sudo[243799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkmsllbuzbzeeqjbntiipqhaniclagxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157908.2562366-1356-262096066092797/AnsiballZ_stat.py'
Nov 26 11:51:48 compute-0 sudo[243799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:48 compute-0 python3.9[243801]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:51:48 compute-0 sudo[243799]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:48 compute-0 ceph-mon[74928]: pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:48 compute-0 sudo[243922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlwwqdyugiyahcqpdtbfxtzigajfnuwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157908.2562366-1356-262096066092797/AnsiballZ_copy.py'
Nov 26 11:51:48 compute-0 sudo[243922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:48 compute-0 python3.9[243924]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764157908.2562366-1356-262096066092797/.source _original_basename=.xeuojx5a follow=False checksum=e8aab3c5278a23a57fc96d653cfb8f4ef41cdd10 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 26 11:51:48 compute-0 sudo[243922]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:49 compute-0 python3.9[244076]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:51:49 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:49 compute-0 python3.9[244228]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:51:50 compute-0 python3.9[244349]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764157909.6431334-1382-115661243384355/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=4c77b2c041a7564aa2c84115117dc8517e9bb9ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:51:50 compute-0 ceph-mon[74928]: pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 11:51:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:51:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 11:51:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:51:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:51:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:51:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:51:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:51:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:51:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:51:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:51:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:51:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 11:51:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:51:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:51:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:51:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 11:51:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:51:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 11:51:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:51:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:51:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:51:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
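The pg_autoscaler figures above are internally consistent: each logged pg target equals the pool's share of raw space times its bias times a factor of 300, which matches three OSDs at the Ceph default mon_target_pg_per_osd of 100 (the default and the OSD count are assumptions inferred from the numbers, not logged directly). A small check, assuming that relation:

def pg_target(capacity_ratio, bias, target_pg_per_osd=100, num_osds=3):
    # pg_target = capacity_ratio * bias * (target_pg_per_osd * num_osds)
    return capacity_ratio * bias * target_pg_per_osd * num_osds

# '.mgr' pool: 7.185749983720779e-06 of space, bias 1.0 -> ~0.0021557 (quantized to 1)
print(pg_target(7.185749983720779e-06, 1.0))
# 'cephfs.cephfs.meta': 5.087256625643029e-07 of space, bias 4.0 -> ~0.00061047 (quantized to 16)
print(pg_target(5.087256625643029e-07, 4.0))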
Nov 26 11:51:50 compute-0 python3.9[244499]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 11:51:51 compute-0 python3.9[244620]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764157910.471989-1397-140131319943510/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=941d5739094d046b86479403aeaaf0441b82ba11 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 11:51:51 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:51:51 compute-0 sudo[244770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqabhnhuxduepiefarybzniupcfoavbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157911.4347441-1414-66973081306068/AnsiballZ_container_config_data.py'
Nov 26 11:51:51 compute-0 sudo[244770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:51 compute-0 python3.9[244772]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 26 11:51:51 compute-0 sudo[244770]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:52 compute-0 sudo[244922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggtgiakdmochymvwytpzqkmfbqmumacq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157911.9953818-1423-176049482616182/AnsiballZ_container_config_hash.py'
Nov 26 11:51:52 compute-0 sudo[244922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:52 compute-0 python3.9[244924]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 11:51:52 compute-0 sudo[244922]: pam_unix(sudo:session): session closed for user root
Nov 26 11:51:52 compute-0 ceph-mon[74928]: pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:52 compute-0 sudo[245074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asilvmlqkpvtpahdaznfqfiumrgtzgan ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764157912.5801022-1433-115407846977498/AnsiballZ_edpm_container_manage.py'
Nov 26 11:51:52 compute-0 sudo[245074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:51:52 compute-0 python3[245076]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 11:51:53 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:54 compute-0 ceph-mon[74928]: pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:55 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:51:56 compute-0 ceph-mon[74928]: pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:57 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:58 compute-0 ceph-mon[74928]: pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:51:59 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:00 compute-0 ceph-mon[74928]: pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:01 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:52:02 compute-0 ceph-mon[74928]: pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:02 compute-0 podman[245136]: 2025-11-26 11:52:02.824063526 +0000 UTC m=+3.250544354 container health_status b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 11:52:02 compute-0 podman[245087]: 2025-11-26 11:52:02.845843465 +0000 UTC m=+9.823130157 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 26 11:52:02 compute-0 podman[245170]: 2025-11-26 11:52:02.941193948 +0000 UTC m=+0.028319460 container create 7286887c5bc189491a9ce67c729d49652f375cb93b520d1b03714580ccb97df8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 26 11:52:02 compute-0 podman[245170]: 2025-11-26 11:52:02.927751139 +0000 UTC m=+0.014876672 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 26 11:52:02 compute-0 python3[245076]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
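The nova_compute_init container created above does nothing but run /sbin/nova_statedir_ownership.py (bind-mounted from /var/lib/openstack/config/nova) with NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id. The script itself is not reproduced in this log, so the following is a purely illustrative sketch of an ownership pass of that shape; every name and detail here is assumed rather than taken from the real file.

import os

STATEDIR = "/var/lib/nova"
# Paths listed in NOVA_STATEDIR_OWNERSHIP_SKIP are left untouched,
# e.g. the immutable /var/lib/nova/compute_id written earlier.
SKIP = set(filter(None, os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "").split(":")))
NOVA_UID = NOVA_GID = 42436  # matches the nova account created earlier in this log

def fix_ownership(root=STATEDIR):
    for dirpath, _dirnames, filenames in os.walk(root):
        for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
            if path in SKIP:
                continue
            st = os.lstat(path)
            if (st.st_uid, st.st_gid) != (NOVA_UID, NOVA_GID):
                os.lchown(path, NOVA_UID, NOVA_GID)

if __name__ == "__main__":
    fix_ownership()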
Nov 26 11:52:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:52:02.985 159928 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:52:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:52:02.986 159928 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:52:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:52:02.986 159928 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:52:03 compute-0 sudo[245074]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:03 compute-0 sudo[245348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioonnkolccyxzaydoajjykqqizweyvlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157923.1623986-1441-6854991175646/AnsiballZ_stat.py'
Nov 26 11:52:03 compute-0 sudo[245348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:52:03 compute-0 python3.9[245350]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:52:03 compute-0 sudo[245348]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:03 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:04 compute-0 sudo[245502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyfacqvyvepdxuqwbfndujbqiefjwoxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157923.823334-1453-194614073532904/AnsiballZ_container_config_data.py'
Nov 26 11:52:04 compute-0 sudo[245502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:52:04 compute-0 python3.9[245504]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 26 11:52:04 compute-0 sudo[245502]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:04 compute-0 sudo[245654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thtzyjsfpmoibiuzvewqhveklyryzghw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157924.3600812-1462-189418129317256/AnsiballZ_container_config_hash.py'
Nov 26 11:52:04 compute-0 sudo[245654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:52:04 compute-0 python3.9[245656]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 11:52:04 compute-0 sudo[245654]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:04 compute-0 ceph-mon[74928]: pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:05 compute-0 sudo[245806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ousdovudwkizhnnpnvkpxikkulbmqskv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764157924.9392226-1472-25909713944561/AnsiballZ_edpm_container_manage.py'
Nov 26 11:52:05 compute-0 sudo[245806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:52:05 compute-0 python3[245808]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 11:52:05 compute-0 podman[245835]: 2025-11-26 11:52:05.44828717 +0000 UTC m=+0.028131777 container create ef587bf8366fd71bc847ac7103a8684e40edd037254dd6eab3d7216a89ea1832 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=nova_compute, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 26 11:52:05 compute-0 podman[245835]: 2025-11-26 11:52:05.434025116 +0000 UTC m=+0.013869754 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 26 11:52:05 compute-0 python3[245808]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 kolla_start
Nov 26 11:52:05 compute-0 sudo[245806]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:05 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:05 compute-0 sudo[246020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyubdnpkcwkwouxepmjxepwpimwhrhcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157925.6563308-1480-136888570899818/AnsiballZ_stat.py'
Nov 26 11:52:05 compute-0 sudo[246020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:52:05 compute-0 podman[245985]: 2025-11-26 11:52:05.854143398 +0000 UTC m=+0.037346736 container health_status 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 26 11:52:05 compute-0 python3.9[246030]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:52:06 compute-0 sudo[246020]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:06 compute-0 sudo[246183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfqamqbxlulumebzkjzcadbhbqceavkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157926.2043543-1489-253518706512693/AnsiballZ_file.py'
Nov 26 11:52:06 compute-0 sudo[246183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:52:06 compute-0 python3.9[246185]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:52:06 compute-0 sudo[246183]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:52:06 compute-0 ceph-mon[74928]: pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:06 compute-0 sudo[246334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srtnnvjtxgyvkxakbjgtqmjozbxzpzwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157926.599798-1489-49946180765486/AnsiballZ_copy.py'
Nov 26 11:52:06 compute-0 sudo[246334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:52:07 compute-0 python3.9[246336]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764157926.599798-1489-49946180765486/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 11:52:07 compute-0 sudo[246334]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:07 compute-0 sudo[246410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osutmcpbovrscbzwprrvpasuzbecvyxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157926.599798-1489-49946180765486/AnsiballZ_systemd.py'
Nov 26 11:52:07 compute-0 sudo[246410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:52:07 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:08 compute-0 python3.9[246412]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 11:52:08 compute-0 systemd[1]: Reloading.
Nov 26 11:52:08 compute-0 systemd-rc-local-generator[246437]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:52:08 compute-0 systemd-sysv-generator[246440]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:52:08 compute-0 sudo[246410]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:08 compute-0 sudo[246521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkaawvqdjbisskljswloigwenruujyio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157926.599798-1489-49946180765486/AnsiballZ_systemd.py'
Nov 26 11:52:08 compute-0 sudo[246521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:52:08 compute-0 ceph-mon[74928]: pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:08 compute-0 python3.9[246523]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 11:52:08 compute-0 systemd[1]: Reloading.
Nov 26 11:52:09 compute-0 systemd-sysv-generator[246573]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 11:52:09 compute-0 systemd-rc-local-generator[246570]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 11:52:09 compute-0 sudo[246526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:52:09 compute-0 sudo[246526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:52:09 compute-0 sudo[246526]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:09 compute-0 systemd[1]: Starting nova_compute container...
Nov 26 11:52:09 compute-0 sudo[246589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:52:09 compute-0 sudo[246589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:52:09 compute-0 sudo[246589]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:09 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3e688ab49987e793038b44b4ae7ebdf4f0659aafe1a1226131f39019fac7ba/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3e688ab49987e793038b44b4ae7ebdf4f0659aafe1a1226131f39019fac7ba/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3e688ab49987e793038b44b4ae7ebdf4f0659aafe1a1226131f39019fac7ba/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3e688ab49987e793038b44b4ae7ebdf4f0659aafe1a1226131f39019fac7ba/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3e688ab49987e793038b44b4ae7ebdf4f0659aafe1a1226131f39019fac7ba/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:09 compute-0 podman[246588]: 2025-11-26 11:52:09.301691539 +0000 UTC m=+0.073245406 container init ef587bf8366fd71bc847ac7103a8684e40edd037254dd6eab3d7216a89ea1832 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute)
Nov 26 11:52:09 compute-0 sudo[246626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:52:09 compute-0 podman[246588]: 2025-11-26 11:52:09.307240403 +0000 UTC m=+0.078794260 container start ef587bf8366fd71bc847ac7103a8684e40edd037254dd6eab3d7216a89ea1832 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Nov 26 11:52:09 compute-0 sudo[246626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:52:09 compute-0 podman[246588]: nova_compute
Nov 26 11:52:09 compute-0 sudo[246626]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:09 compute-0 nova_compute[246627]: + sudo -E kolla_set_configs
Nov 26 11:52:09 compute-0 systemd[1]: Started nova_compute container.
Nov 26 11:52:09 compute-0 sudo[246521]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:09 compute-0 sudo[246657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 11:52:09 compute-0 sudo[246657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Validating config file
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Copying service configuration files
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Deleting /etc/ceph
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Creating directory /etc/ceph
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Setting permission for /etc/ceph
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Writing out command to execute
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 26 11:52:09 compute-0 nova_compute[246627]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 26 11:52:09 compute-0 nova_compute[246627]: ++ cat /run_command
Nov 26 11:52:09 compute-0 nova_compute[246627]: + CMD=nova-compute
Nov 26 11:52:09 compute-0 nova_compute[246627]: + ARGS=
Nov 26 11:52:09 compute-0 nova_compute[246627]: + sudo kolla_copy_cacerts
Nov 26 11:52:09 compute-0 nova_compute[246627]: + [[ ! -n '' ]]
Nov 26 11:52:09 compute-0 nova_compute[246627]: + . kolla_extend_start
Nov 26 11:52:09 compute-0 nova_compute[246627]: Running command: 'nova-compute'
Nov 26 11:52:09 compute-0 nova_compute[246627]: + echo 'Running command: '\''nova-compute'\'''
Nov 26 11:52:09 compute-0 nova_compute[246627]: + umask 0022
Nov 26 11:52:09 compute-0 nova_compute[246627]: + exec nova-compute
Nov 26 11:52:09 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:09 compute-0 sudo[246657]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:52:09 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:52:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:52:09 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:52:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:52:09 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:52:09 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 0703b94f-a447-43bc-aecf-6df25254e20d does not exist
Nov 26 11:52:09 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 4df1eb08-5344-41fb-8a22-e29083d8015e does not exist
Nov 26 11:52:09 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 9f7330f7-9494-4e72-9274-f250c36bc0d2 does not exist
Nov 26 11:52:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 11:52:09 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:52:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 11:52:09 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:52:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:52:09 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:52:09 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:52:09 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:52:09 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:52:09 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:52:09 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:52:09 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:52:09 compute-0 sudo[246784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:52:09 compute-0 sudo[246784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:52:09 compute-0 sudo[246784]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:09 compute-0 sudo[246823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:52:09 compute-0 sudo[246823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:52:09 compute-0 sudo[246823]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:09 compute-0 sudo[246866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:52:09 compute-0 sudo[246866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:52:09 compute-0 sudo[246866]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:09 compute-0 sudo[246916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 11:52:09 compute-0 sudo[246916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:52:10 compute-0 python3.9[246964]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:52:10 compute-0 podman[247016]: 2025-11-26 11:52:10.159456297 +0000 UTC m=+0.026035341 container create 30847e07ec0adf646d845521dc043b012e410e73147d610caa471aa22f5a4506 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 11:52:10 compute-0 systemd[1]: Started libpod-conmon-30847e07ec0adf646d845521dc043b012e410e73147d610caa471aa22f5a4506.scope.
Nov 26 11:52:10 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:52:10 compute-0 podman[247016]: 2025-11-26 11:52:10.211626596 +0000 UTC m=+0.078205639 container init 30847e07ec0adf646d845521dc043b012e410e73147d610caa471aa22f5a4506 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:52:10 compute-0 podman[247016]: 2025-11-26 11:52:10.217010469 +0000 UTC m=+0.083589513 container start 30847e07ec0adf646d845521dc043b012e410e73147d610caa471aa22f5a4506 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 11:52:10 compute-0 strange_rhodes[247035]: 167 167
Nov 26 11:52:10 compute-0 podman[247016]: 2025-11-26 11:52:10.220354114 +0000 UTC m=+0.086933168 container attach 30847e07ec0adf646d845521dc043b012e410e73147d610caa471aa22f5a4506 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:52:10 compute-0 systemd[1]: libpod-30847e07ec0adf646d845521dc043b012e410e73147d610caa471aa22f5a4506.scope: Deactivated successfully.
Nov 26 11:52:10 compute-0 conmon[247035]: conmon 30847e07ec0adf646d84 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-30847e07ec0adf646d845521dc043b012e410e73147d610caa471aa22f5a4506.scope/container/memory.events
Nov 26 11:52:10 compute-0 podman[247016]: 2025-11-26 11:52:10.222099295 +0000 UTC m=+0.088678350 container died 30847e07ec0adf646d845521dc043b012e410e73147d610caa471aa22f5a4506 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:52:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a30742febb467a9e33e34fea770e7fdf2771805f46202507d6c4b71c0e10f8f-merged.mount: Deactivated successfully.
Nov 26 11:52:10 compute-0 podman[247016]: 2025-11-26 11:52:10.242522863 +0000 UTC m=+0.109101906 container remove 30847e07ec0adf646d845521dc043b012e410e73147d610caa471aa22f5a4506 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_rhodes, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 11:52:10 compute-0 podman[247016]: 2025-11-26 11:52:10.148470068 +0000 UTC m=+0.015049122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:52:10 compute-0 systemd[1]: libpod-conmon-30847e07ec0adf646d845521dc043b012e410e73147d610caa471aa22f5a4506.scope: Deactivated successfully.
Nov 26 11:52:10 compute-0 podman[247079]: 2025-11-26 11:52:10.367302886 +0000 UTC m=+0.031410798 container create 3d7b8a1a7aa1dabf4c326a79c5caf4b4ead01018fdc228b7d1e9adfb6971466e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_maxwell, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:52:10 compute-0 systemd[1]: Started libpod-conmon-3d7b8a1a7aa1dabf4c326a79c5caf4b4ead01018fdc228b7d1e9adfb6971466e.scope.
Nov 26 11:52:10 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:52:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dee8f37cf1315f2cd36dabe7c9bb0ed8a2db31799d9ef8c377294f75e46b17f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dee8f37cf1315f2cd36dabe7c9bb0ed8a2db31799d9ef8c377294f75e46b17f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dee8f37cf1315f2cd36dabe7c9bb0ed8a2db31799d9ef8c377294f75e46b17f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dee8f37cf1315f2cd36dabe7c9bb0ed8a2db31799d9ef8c377294f75e46b17f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dee8f37cf1315f2cd36dabe7c9bb0ed8a2db31799d9ef8c377294f75e46b17f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:10 compute-0 podman[247079]: 2025-11-26 11:52:10.426061962 +0000 UTC m=+0.090169863 container init 3d7b8a1a7aa1dabf4c326a79c5caf4b4ead01018fdc228b7d1e9adfb6971466e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_maxwell, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:52:10 compute-0 podman[247079]: 2025-11-26 11:52:10.435554604 +0000 UTC m=+0.099662506 container start 3d7b8a1a7aa1dabf4c326a79c5caf4b4ead01018fdc228b7d1e9adfb6971466e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_maxwell, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:52:10 compute-0 podman[247079]: 2025-11-26 11:52:10.437186631 +0000 UTC m=+0.101294533 container attach 3d7b8a1a7aa1dabf4c326a79c5caf4b4ead01018fdc228b7d1e9adfb6971466e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:52:10 compute-0 podman[247079]: 2025-11-26 11:52:10.352886989 +0000 UTC m=+0.016994912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:52:10 compute-0 python3.9[247200]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:52:10 compute-0 ceph-mon[74928]: pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:11 compute-0 python3.9[247358]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 11:52:11 compute-0 awesome_maxwell[247122]: --> passed data devices: 0 physical, 3 LVM
Nov 26 11:52:11 compute-0 awesome_maxwell[247122]: --> relative data size: 1.0
Nov 26 11:52:11 compute-0 awesome_maxwell[247122]: --> All data devices are unavailable
Nov 26 11:52:11 compute-0 nova_compute[246627]: 2025-11-26 11:52:11.245 246654 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 26 11:52:11 compute-0 nova_compute[246627]: 2025-11-26 11:52:11.245 246654 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 26 11:52:11 compute-0 nova_compute[246627]: 2025-11-26 11:52:11.245 246654 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 26 11:52:11 compute-0 nova_compute[246627]: 2025-11-26 11:52:11.245 246654 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 26 11:52:11 compute-0 systemd[1]: libpod-3d7b8a1a7aa1dabf4c326a79c5caf4b4ead01018fdc228b7d1e9adfb6971466e.scope: Deactivated successfully.
Nov 26 11:52:11 compute-0 podman[247079]: 2025-11-26 11:52:11.264741866 +0000 UTC m=+0.928849789 container died 3d7b8a1a7aa1dabf4c326a79c5caf4b4ead01018fdc228b7d1e9adfb6971466e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_maxwell, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 11:52:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-3dee8f37cf1315f2cd36dabe7c9bb0ed8a2db31799d9ef8c377294f75e46b17f-merged.mount: Deactivated successfully.
Nov 26 11:52:11 compute-0 podman[247079]: 2025-11-26 11:52:11.298882894 +0000 UTC m=+0.962990786 container remove 3d7b8a1a7aa1dabf4c326a79c5caf4b4ead01018fdc228b7d1e9adfb6971466e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_maxwell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:52:11 compute-0 sudo[246916]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:11 compute-0 systemd[1]: libpod-conmon-3d7b8a1a7aa1dabf4c326a79c5caf4b4ead01018fdc228b7d1e9adfb6971466e.scope: Deactivated successfully.
Nov 26 11:52:11 compute-0 sudo[247410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:52:11 compute-0 sudo[247410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:52:11 compute-0 nova_compute[246627]: 2025-11-26 11:52:11.363 246654 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 11:52:11 compute-0 sudo[247410]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:11 compute-0 nova_compute[246627]: 2025-11-26 11:52:11.377 246654 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 11:52:11 compute-0 nova_compute[246627]: 2025-11-26 11:52:11.377 246654 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 26 11:52:11 compute-0 sudo[247436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:52:11 compute-0 sudo[247436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:52:11 compute-0 sudo[247436]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:52:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:52:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:52:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:52:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:52:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:52:11 compute-0 sudo[247485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:52:11 compute-0 sudo[247485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:52:11 compute-0 sudo[247485]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:11 compute-0 sudo[247539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- lvm list --format json
Nov 26 11:52:11 compute-0 sudo[247539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:52:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:52:11 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:11 compute-0 podman[247595]: 2025-11-26 11:52:11.732860487 +0000 UTC m=+0.026382056 container create 4f6df0c5b3e6e5d6875876251451116c7804afeeb822282801db9200faf97bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brahmagupta, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:52:11 compute-0 systemd[1]: Started libpod-conmon-4f6df0c5b3e6e5d6875876251451116c7804afeeb822282801db9200faf97bff.scope.
Nov 26 11:52:11 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:52:11 compute-0 podman[247595]: 2025-11-26 11:52:11.781934346 +0000 UTC m=+0.075455935 container init 4f6df0c5b3e6e5d6875876251451116c7804afeeb822282801db9200faf97bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brahmagupta, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:52:11 compute-0 podman[247595]: 2025-11-26 11:52:11.786133405 +0000 UTC m=+0.079654974 container start 4f6df0c5b3e6e5d6875876251451116c7804afeeb822282801db9200faf97bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 26 11:52:11 compute-0 podman[247595]: 2025-11-26 11:52:11.787685402 +0000 UTC m=+0.081206991 container attach 4f6df0c5b3e6e5d6875876251451116c7804afeeb822282801db9200faf97bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brahmagupta, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 11:52:11 compute-0 hardcore_brahmagupta[247629]: 167 167
Nov 26 11:52:11 compute-0 systemd[1]: libpod-4f6df0c5b3e6e5d6875876251451116c7804afeeb822282801db9200faf97bff.scope: Deactivated successfully.
Nov 26 11:52:11 compute-0 podman[247595]: 2025-11-26 11:52:11.790255448 +0000 UTC m=+0.083777018 container died 4f6df0c5b3e6e5d6875876251451116c7804afeeb822282801db9200faf97bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:52:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-99fbb95a2ef9e8c0859d75bc2975a93b9c8f88a9b7016db79a917e65d3beef0f-merged.mount: Deactivated successfully.
Nov 26 11:52:11 compute-0 podman[247595]: 2025-11-26 11:52:11.817695188 +0000 UTC m=+0.111216758 container remove 4f6df0c5b3e6e5d6875876251451116c7804afeeb822282801db9200faf97bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 11:52:11 compute-0 podman[247595]: 2025-11-26 11:52:11.721861574 +0000 UTC m=+0.015383163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:52:11 compute-0 systemd[1]: libpod-conmon-4f6df0c5b3e6e5d6875876251451116c7804afeeb822282801db9200faf97bff.scope: Deactivated successfully.
Nov 26 11:52:11 compute-0 sudo[247699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkwargaahsfshkmunfvmssaclztfckhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157931.4146106-1549-65353633994073/AnsiballZ_podman_container.py'
Nov 26 11:52:11 compute-0 sudo[247699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:52:11 compute-0 nova_compute[246627]: 2025-11-26 11:52:11.919 246654 INFO nova.virt.driver [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 26 11:52:11 compute-0 podman[247707]: 2025-11-26 11:52:11.94297813 +0000 UTC m=+0.031145659 container create 8a5ef21766d4f325d2f4b0150033f98109351defa32b134d965a89dd8a0b9229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:52:11 compute-0 systemd[1]: Started libpod-conmon-8a5ef21766d4f325d2f4b0150033f98109351defa32b134d965a89dd8a0b9229.scope.
Nov 26 11:52:11 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:52:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0789a0db4ca63d809b1fad9da37434257b10f1c988098adc368ba0a0186d1d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0789a0db4ca63d809b1fad9da37434257b10f1c988098adc368ba0a0186d1d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0789a0db4ca63d809b1fad9da37434257b10f1c988098adc368ba0a0186d1d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0789a0db4ca63d809b1fad9da37434257b10f1c988098adc368ba0a0186d1d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:12 compute-0 podman[247707]: 2025-11-26 11:52:12.001975364 +0000 UTC m=+0.090142903 container init 8a5ef21766d4f325d2f4b0150033f98109351defa32b134d965a89dd8a0b9229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:52:12 compute-0 podman[247707]: 2025-11-26 11:52:12.007728634 +0000 UTC m=+0.095896163 container start 8a5ef21766d4f325d2f4b0150033f98109351defa32b134d965a89dd8a0b9229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_banach, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 11:52:12 compute-0 podman[247707]: 2025-11-26 11:52:12.008876859 +0000 UTC m=+0.097044389 container attach 8a5ef21766d4f325d2f4b0150033f98109351defa32b134d965a89dd8a0b9229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:52:12 compute-0 podman[247707]: 2025-11-26 11:52:11.930519365 +0000 UTC m=+0.018686914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.034 246654 INFO nova.compute.provider_config [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.046 246654 DEBUG oslo_concurrency.lockutils [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.046 246654 DEBUG oslo_concurrency.lockutils [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.046 246654 DEBUG oslo_concurrency.lockutils [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.047 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.047 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.047 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.047 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.048 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.048 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.048 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.048 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.048 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.048 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.049 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.049 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.049 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.049 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.049 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.050 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.050 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.050 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.050 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.050 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.051 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.051 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.051 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.051 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.051 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.052 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.052 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.052 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.052 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.052 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.052 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.053 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.053 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.053 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.053 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.053 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.054 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.054 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.054 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.054 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.054 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.055 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.055 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.055 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.055 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.055 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.056 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.056 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.056 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.056 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.056 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.056 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.057 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.057 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.057 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.057 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.057 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.058 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.058 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.058 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.058 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.058 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.058 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.059 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.059 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.059 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.059 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.059 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.059 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.060 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.060 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.060 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.060 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.060 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.061 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.061 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.061 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.061 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.061 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.061 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.062 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.062 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.062 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.062 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.062 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.063 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.063 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.063 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.063 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.063 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.064 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.064 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.064 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.064 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.065 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.065 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.065 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.065 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.065 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.065 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.066 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.066 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.066 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.066 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.066 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.067 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.067 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.067 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.067 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.067 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.067 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.068 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.068 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.068 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.068 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.068 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.068 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.069 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.069 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.069 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.069 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.069 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.070 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.070 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.070 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.070 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.070 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.070 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.071 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.071 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.071 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.071 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.071 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.071 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.072 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.072 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.072 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.072 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.072 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.073 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.073 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.073 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.073 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.073 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.073 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.074 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.074 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.074 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.074 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.074 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.075 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.075 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.075 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.075 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.075 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.075 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.076 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.076 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.076 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.076 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.076 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.077 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.077 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.077 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.077 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.077 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.078 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.078 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.078 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.078 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.078 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.078 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.079 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.079 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.079 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.079 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.079 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.079 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.080 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.080 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.080 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.080 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.080 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.081 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.081 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.081 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.081 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.081 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.082 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.082 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.082 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.082 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.082 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.082 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.083 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.083 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.083 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.083 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.083 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.083 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.084 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.084 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.084 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.084 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.084 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.085 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.085 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.085 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.085 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.085 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.085 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.086 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.086 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.086 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.086 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.086 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.087 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.087 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.087 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.087 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.087 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.087 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.088 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.088 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.088 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.088 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.088 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.089 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.089 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.089 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.089 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.089 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.089 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.090 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.090 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.090 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.090 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.090 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.090 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.091 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.091 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.091 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.091 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.091 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.092 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.092 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.092 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.092 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.092 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.092 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.093 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.093 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.093 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.093 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.093 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.094 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.094 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.094 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.094 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.094 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.094 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.095 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.095 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.095 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.095 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.095 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.095 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.096 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.096 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.096 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.096 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.096 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.097 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.097 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.097 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.097 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.097 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.097 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.098 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.098 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.098 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.098 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.098 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.099 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.099 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.099 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.099 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.099 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.100 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.100 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.100 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.100 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.100 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.101 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.101 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.101 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.101 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 python3.9[247701]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.101 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.101 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 rsyslogd[960]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.102 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.102 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.104 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.104 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.104 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.104 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.104 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.104 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.105 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.105 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.105 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.105 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.105 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.105 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.105 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.106 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.106 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.106 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.106 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.106 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.106 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.106 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.107 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.107 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.107 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.107 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.107 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.107 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.108 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.108 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.108 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.108 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.108 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.108 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.109 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.109 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.109 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.109 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.109 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.109 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.110 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.110 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.110 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.110 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.110 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.110 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.110 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.111 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.111 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.111 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.111 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.111 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.111 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.111 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.112 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.112 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.112 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.112 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.112 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.112 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.113 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.113 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.113 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.113 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.113 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.113 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.113 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.114 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.114 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.114 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.114 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.114 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.114 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.115 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.115 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.115 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.115 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.115 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.115 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.115 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.116 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.116 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.116 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.116 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.116 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.116 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.116 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.117 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.117 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.117 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.117 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.117 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.117 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.117 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.118 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.118 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.118 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.118 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.118 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.118 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.118 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.119 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.119 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.119 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.119 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.119 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.119 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.119 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.120 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.120 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.120 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.120 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.120 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.120 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.120 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.121 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.121 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.121 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.121 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.121 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.121 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.122 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.122 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.122 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.122 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.122 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.122 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.123 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.123 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.123 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.123 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.123 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.123 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.123 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.124 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.124 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.124 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.124 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.124 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.124 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.124 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.124 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.125 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.125 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.125 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.125 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.125 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.125 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.126 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.126 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.126 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.126 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.126 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.126 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.126 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.127 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.127 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.127 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.127 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.127 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.127 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.127 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.128 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.128 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.128 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.128 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.128 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.128 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.128 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.129 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.129 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.129 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.129 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 rsyslogd[960]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.129 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.129 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.129 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.130 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.130 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.130 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.130 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.130 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.130 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.130 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.131 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.131 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.131 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.131 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.131 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.131 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.131 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.132 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.132 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.132 246654 WARNING oslo_config.cfg [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 26 11:52:12 compute-0 nova_compute[246627]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 26 11:52:12 compute-0 nova_compute[246627]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 26 11:52:12 compute-0 nova_compute[246627]: and ``live_migration_inbound_addr`` respectively.
Nov 26 11:52:12 compute-0 nova_compute[246627]: ).  Its value may be silently ignored in the future.
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.132 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.132 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.133 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.133 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.133 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.133 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.133 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.133 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.134 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.134 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.134 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.134 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.134 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.134 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.134 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.135 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.135 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.135 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.135 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.rbd_secret_uuid        = ebab460c-3fd7-5f66-aa87-e10c143123f7 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.135 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.135 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.135 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.136 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.136 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.136 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.136 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.136 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.136 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.136 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.137 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.137 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.137 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.137 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.137 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.137 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.138 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.138 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.138 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.138 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.138 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.138 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.138 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.139 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.139 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.139 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.139 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.139 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.140 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.140 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.140 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.140 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.140 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.140 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.141 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.141 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.141 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.141 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.141 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.141 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.141 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.141 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.142 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.142 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.142 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.142 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.142 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.142 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.142 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.143 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.143 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.143 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.143 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.143 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.143 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.143 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.144 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.144 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.144 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.144 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.144 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.144 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.144 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.145 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.145 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.145 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.145 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.145 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.146 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.146 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.146 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.146 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.146 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.146 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.146 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.147 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.147 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.147 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.147 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.147 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.147 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.147 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.148 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.148 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.148 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.148 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.148 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.148 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.148 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.149 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.149 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.149 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.149 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.149 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.149 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.149 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.150 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.150 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.150 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.150 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.150 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.150 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.150 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.151 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.151 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.151 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.151 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.151 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.151 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.151 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.152 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.152 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.152 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.152 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.152 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.152 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.152 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.153 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.153 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.153 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.153 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.153 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.153 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.154 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.154 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.154 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.154 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.154 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.154 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.154 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.155 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.155 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.155 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.155 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.155 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.155 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.155 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.156 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.156 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.156 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.156 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.157 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.157 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.158 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.158 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.158 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.158 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.158 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.158 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.158 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.159 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.159 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.159 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.159 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.159 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.160 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.160 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.160 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.160 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.160 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.160 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.161 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.161 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.161 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.161 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.161 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.161 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.161 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.162 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.162 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.162 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.162 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.162 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.166 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.166 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.166 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.167 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.167 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.167 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.167 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.167 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.167 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.168 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.168 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.168 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.168 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.168 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.168 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.169 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.169 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.169 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.169 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.169 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.169 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.170 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.170 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.170 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.170 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.170 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.170 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.170 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.171 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.171 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.171 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.171 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.171 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.171 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.171 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.172 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.172 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.172 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.172 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.172 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.172 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.172 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.173 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.173 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.173 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.173 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.173 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.173 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.173 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.174 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.174 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.174 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.174 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.174 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.174 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.174 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.175 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.175 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.175 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.175 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.175 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.176 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.176 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.176 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.176 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.176 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.176 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.176 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.177 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.177 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.177 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.177 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.177 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.177 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.177 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.177 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.178 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.178 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.178 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.178 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.178 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.178 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.178 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.179 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.179 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.179 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.179 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.179 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.179 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.179 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.180 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.180 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.180 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.180 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.180 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.180 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.181 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.181 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.181 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.181 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.181 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.181 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.182 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.182 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.182 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.182 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.182 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.182 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.182 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.183 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.183 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.183 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.183 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.183 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.184 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.184 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.184 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.184 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.184 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.185 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.185 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.185 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.185 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.185 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.185 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.185 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.186 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.186 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.186 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.186 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.186 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.186 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.186 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.187 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.187 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.187 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.187 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.187 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.187 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.187 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.188 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.188 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.188 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.188 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.188 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.188 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.188 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.189 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.189 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.189 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.189 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.189 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.189 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.190 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.190 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.190 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.190 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.190 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.190 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.190 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.190 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.191 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.191 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.191 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.191 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.191 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.191 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.191 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.192 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.192 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.192 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.192 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.192 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.192 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.192 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.193 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.193 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.193 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.193 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.193 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.193 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.194 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.194 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.194 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.194 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.194 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.194 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.194 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.195 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.195 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.195 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.195 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.195 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.196 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.196 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.196 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.196 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.196 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.196 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.197 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.197 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.197 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.197 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.197 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.197 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.198 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.198 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.198 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.198 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.198 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.198 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.198 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.199 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.199 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.199 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.199 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.199 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.199 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.199 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.199 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.200 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.200 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.200 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.200 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.200 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.200 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.201 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.201 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.201 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.201 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.201 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.201 246654 DEBUG oslo_service.service [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.202 246654 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.213 246654 DEBUG nova.virt.libvirt.host [None req-26fb6dc9-7549-4195-bc95-44be0bc6bde6 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.214 246654 DEBUG nova.virt.libvirt.host [None req-26fb6dc9-7549-4195-bc95-44be0bc6bde6 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.214 246654 DEBUG nova.virt.libvirt.host [None req-26fb6dc9-7549-4195-bc95-44be0bc6bde6 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.214 246654 DEBUG nova.virt.libvirt.host [None req-26fb6dc9-7549-4195-bc95-44be0bc6bde6 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 26 11:52:12 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 26 11:52:12 compute-0 sudo[247699]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:12 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.267 246654 DEBUG nova.virt.libvirt.host [None req-26fb6dc9-7549-4195-bc95-44be0bc6bde6 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fb8b26db310> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.269 246654 DEBUG nova.virt.libvirt.host [None req-26fb6dc9-7549-4195-bc95-44be0bc6bde6 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fb8b26db310> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.270 246654 INFO nova.virt.libvirt.driver [None req-26fb6dc9-7549-4195-bc95-44be0bc6bde6 - - - - - -] Connection event '1' reason 'None'
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.279 246654 WARNING nova.virt.libvirt.driver [None req-26fb6dc9-7549-4195-bc95-44be0bc6bde6 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.279 246654 DEBUG nova.virt.libvirt.volume.mount [None req-26fb6dc9-7549-4195-bc95-44be0bc6bde6 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 26 11:52:12 compute-0 sudo[247947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxnhfeywvuixtuixkfxpbhrcykyowgva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157932.3887608-1557-119898892289082/AnsiballZ_systemd.py'
Nov 26 11:52:12 compute-0 sudo[247947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:52:12 compute-0 nifty_banach[247721]: {
Nov 26 11:52:12 compute-0 nifty_banach[247721]:     "0": [
Nov 26 11:52:12 compute-0 nifty_banach[247721]:         {
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "devices": [
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "/dev/loop3"
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             ],
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "lv_name": "ceph_lv0",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "lv_size": "21470642176",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a9ad59a0-aa2e-4d92-b571-519d2d145b6a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "lv_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "name": "ceph_lv0",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "tags": {
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.block_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.cluster_name": "ceph",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.crush_device_class": "",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.encrypted": "0",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.osd_fsid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.osd_id": "0",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.type": "block",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.vdo": "0"
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             },
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "type": "block",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "vg_name": "ceph_vg0"
Nov 26 11:52:12 compute-0 nifty_banach[247721]:         }
Nov 26 11:52:12 compute-0 nifty_banach[247721]:     ],
Nov 26 11:52:12 compute-0 nifty_banach[247721]:     "1": [
Nov 26 11:52:12 compute-0 nifty_banach[247721]:         {
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "devices": [
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "/dev/loop4"
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             ],
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "lv_name": "ceph_lv1",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "lv_size": "21470642176",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2627095b-eef8-4027-bfef-68bf7cb6801f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "lv_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "name": "ceph_lv1",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "tags": {
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.block_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.cluster_name": "ceph",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.crush_device_class": "",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.encrypted": "0",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.osd_fsid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.osd_id": "1",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.type": "block",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.vdo": "0"
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             },
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "type": "block",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "vg_name": "ceph_vg1"
Nov 26 11:52:12 compute-0 nifty_banach[247721]:         }
Nov 26 11:52:12 compute-0 nifty_banach[247721]:     ],
Nov 26 11:52:12 compute-0 nifty_banach[247721]:     "2": [
Nov 26 11:52:12 compute-0 nifty_banach[247721]:         {
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "devices": [
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "/dev/loop5"
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             ],
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "lv_name": "ceph_lv2",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "lv_size": "21470642176",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d56156fb-7361-4bef-b06b-1320109b4323,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "lv_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "name": "ceph_lv2",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "tags": {
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.block_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.cluster_name": "ceph",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.crush_device_class": "",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.encrypted": "0",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.osd_fsid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.osd_id": "2",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.type": "block",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:                 "ceph.vdo": "0"
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             },
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "type": "block",
Nov 26 11:52:12 compute-0 nifty_banach[247721]:             "vg_name": "ceph_vg2"
Nov 26 11:52:12 compute-0 nifty_banach[247721]:         }
Nov 26 11:52:12 compute-0 nifty_banach[247721]:     ]
Nov 26 11:52:12 compute-0 nifty_banach[247721]: }
Nov 26 11:52:12 compute-0 systemd[1]: libpod-8a5ef21766d4f325d2f4b0150033f98109351defa32b134d965a89dd8a0b9229.scope: Deactivated successfully.
Nov 26 11:52:12 compute-0 podman[247707]: 2025-11-26 11:52:12.659659017 +0000 UTC m=+0.747826556 container died 8a5ef21766d4f325d2f4b0150033f98109351defa32b134d965a89dd8a0b9229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:52:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0789a0db4ca63d809b1fad9da37434257b10f1c988098adc368ba0a0186d1d2-merged.mount: Deactivated successfully.
Nov 26 11:52:12 compute-0 podman[247707]: 2025-11-26 11:52:12.705889434 +0000 UTC m=+0.794056963 container remove 8a5ef21766d4f325d2f4b0150033f98109351defa32b134d965a89dd8a0b9229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:52:12 compute-0 systemd[1]: libpod-conmon-8a5ef21766d4f325d2f4b0150033f98109351defa32b134d965a89dd8a0b9229.scope: Deactivated successfully.
Nov 26 11:52:12 compute-0 sudo[247539]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:12 compute-0 ceph-mon[74928]: pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:12 compute-0 sudo[247970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:52:12 compute-0 sudo[247970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:52:12 compute-0 sudo[247970]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:12 compute-0 sudo[247995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:52:12 compute-0 sudo[247995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:52:12 compute-0 sudo[247995]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:12 compute-0 python3.9[247949]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 11:52:12 compute-0 sudo[248020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:52:12 compute-0 sudo[248020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:52:12 compute-0 sudo[248020]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:12 compute-0 systemd[1]: Stopping nova_compute container...
Nov 26 11:52:12 compute-0 sudo[248047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- raw list --format json
Nov 26 11:52:12 compute-0 sudo[248047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.920 246654 DEBUG oslo_concurrency.lockutils [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.921 246654 DEBUG oslo_concurrency.lockutils [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 11:52:12 compute-0 nova_compute[246627]: 2025-11-26 11:52:12.921 246654 DEBUG oslo_concurrency.lockutils [None req-38c67b84-ca8d-46c3-a8d9-f0418f39e0e4 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 11:52:13 compute-0 podman[248116]: 2025-11-26 11:52:13.142840123 +0000 UTC m=+0.028431581 container create 6ebca7dc7b6eee52e5ca46c5f402d78646eb31b4ce523f441c7e43f7e3a6de5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 11:52:13 compute-0 systemd[1]: Started libpod-conmon-6ebca7dc7b6eee52e5ca46c5f402d78646eb31b4ce523f441c7e43f7e3a6de5e.scope.
Nov 26 11:52:13 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:52:13 compute-0 podman[248116]: 2025-11-26 11:52:13.196889777 +0000 UTC m=+0.082481245 container init 6ebca7dc7b6eee52e5ca46c5f402d78646eb31b4ce523f441c7e43f7e3a6de5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_brown, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 11:52:13 compute-0 podman[248116]: 2025-11-26 11:52:13.201232356 +0000 UTC m=+0.086823814 container start 6ebca7dc7b6eee52e5ca46c5f402d78646eb31b4ce523f441c7e43f7e3a6de5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_brown, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 11:52:13 compute-0 podman[248116]: 2025-11-26 11:52:13.202531456 +0000 UTC m=+0.088122914 container attach 6ebca7dc7b6eee52e5ca46c5f402d78646eb31b4ce523f441c7e43f7e3a6de5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_brown, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 11:52:13 compute-0 priceless_brown[248129]: 167 167
Nov 26 11:52:13 compute-0 systemd[1]: libpod-6ebca7dc7b6eee52e5ca46c5f402d78646eb31b4ce523f441c7e43f7e3a6de5e.scope: Deactivated successfully.
Nov 26 11:52:13 compute-0 podman[248116]: 2025-11-26 11:52:13.205179279 +0000 UTC m=+0.090770747 container died 6ebca7dc7b6eee52e5ca46c5f402d78646eb31b4ce523f441c7e43f7e3a6de5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_brown, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:52:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2af8245cd4d8998acec6be4a0395567d17b1c507695265e15a8afc7318382d5-merged.mount: Deactivated successfully.
Nov 26 11:52:13 compute-0 podman[248116]: 2025-11-26 11:52:13.130067345 +0000 UTC m=+0.015658823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:52:13 compute-0 podman[248116]: 2025-11-26 11:52:13.233814153 +0000 UTC m=+0.119405611 container remove 6ebca7dc7b6eee52e5ca46c5f402d78646eb31b4ce523f441c7e43f7e3a6de5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_brown, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:52:13 compute-0 virtqemud[247765]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 26 11:52:13 compute-0 virtqemud[247765]: hostname: compute-0
Nov 26 11:52:13 compute-0 virtqemud[247765]: End of file while reading data: Input/output error
Nov 26 11:52:13 compute-0 systemd[1]: libpod-ef587bf8366fd71bc847ac7103a8684e40edd037254dd6eab3d7216a89ea1832.scope: Deactivated successfully.
Nov 26 11:52:13 compute-0 systemd[1]: libpod-ef587bf8366fd71bc847ac7103a8684e40edd037254dd6eab3d7216a89ea1832.scope: Consumed 2.436s CPU time.
Nov 26 11:52:13 compute-0 podman[248054]: 2025-11-26 11:52:13.242818984 +0000 UTC m=+0.350240193 container died ef587bf8366fd71bc847ac7103a8684e40edd037254dd6eab3d7216a89ea1832 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:52:13 compute-0 systemd[1]: libpod-conmon-6ebca7dc7b6eee52e5ca46c5f402d78646eb31b4ce523f441c7e43f7e3a6de5e.scope: Deactivated successfully.
Nov 26 11:52:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ef587bf8366fd71bc847ac7103a8684e40edd037254dd6eab3d7216a89ea1832-userdata-shm.mount: Deactivated successfully.
Nov 26 11:52:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f3e688ab49987e793038b44b4ae7ebdf4f0659aafe1a1226131f39019fac7ba-merged.mount: Deactivated successfully.
Nov 26 11:52:13 compute-0 podman[248054]: 2025-11-26 11:52:13.577296671 +0000 UTC m=+0.684717879 container cleanup ef587bf8366fd71bc847ac7103a8684e40edd037254dd6eab3d7216a89ea1832 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 26 11:52:13 compute-0 podman[248054]: nova_compute
Nov 26 11:52:13 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:13 compute-0 podman[248162]: 2025-11-26 11:52:13.592748801 +0000 UTC m=+0.256994159 container create 3372dae3b9a4c9d0fd53b8283aa0b772d8854e7dd44ec484e4aab95015f894a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kare, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:52:13 compute-0 podman[248172]: nova_compute
Nov 26 11:52:13 compute-0 systemd[1]: Started libpod-conmon-3372dae3b9a4c9d0fd53b8283aa0b772d8854e7dd44ec484e4aab95015f894a6.scope.
Nov 26 11:52:13 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 26 11:52:13 compute-0 systemd[1]: Stopped nova_compute container.
Nov 26 11:52:13 compute-0 systemd[1]: Starting nova_compute container...
Nov 26 11:52:13 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:52:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/625a3a2003bad1fcbdd210c1b6517e6c3d9ae882e5cacc341cb315e002ffaf2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/625a3a2003bad1fcbdd210c1b6517e6c3d9ae882e5cacc341cb315e002ffaf2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/625a3a2003bad1fcbdd210c1b6517e6c3d9ae882e5cacc341cb315e002ffaf2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/625a3a2003bad1fcbdd210c1b6517e6c3d9ae882e5cacc341cb315e002ffaf2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:13 compute-0 podman[248162]: 2025-11-26 11:52:13.580070571 +0000 UTC m=+0.244315940 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:52:13 compute-0 podman[248162]: 2025-11-26 11:52:13.650722074 +0000 UTC m=+0.314967442 container init 3372dae3b9a4c9d0fd53b8283aa0b772d8854e7dd44ec484e4aab95015f894a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:52:13 compute-0 podman[248162]: 2025-11-26 11:52:13.655587499 +0000 UTC m=+0.319832847 container start 3372dae3b9a4c9d0fd53b8283aa0b772d8854e7dd44ec484e4aab95015f894a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 11:52:13 compute-0 podman[248162]: 2025-11-26 11:52:13.659724652 +0000 UTC m=+0.323970010 container attach 3372dae3b9a4c9d0fd53b8283aa0b772d8854e7dd44ec484e4aab95015f894a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kare, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:52:13 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:52:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3e688ab49987e793038b44b4ae7ebdf4f0659aafe1a1226131f39019fac7ba/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3e688ab49987e793038b44b4ae7ebdf4f0659aafe1a1226131f39019fac7ba/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3e688ab49987e793038b44b4ae7ebdf4f0659aafe1a1226131f39019fac7ba/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3e688ab49987e793038b44b4ae7ebdf4f0659aafe1a1226131f39019fac7ba/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3e688ab49987e793038b44b4ae7ebdf4f0659aafe1a1226131f39019fac7ba/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:13 compute-0 podman[248187]: 2025-11-26 11:52:13.711808277 +0000 UTC m=+0.074455367 container init ef587bf8366fd71bc847ac7103a8684e40edd037254dd6eab3d7216a89ea1832 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team)
Nov 26 11:52:13 compute-0 podman[248187]: 2025-11-26 11:52:13.716124337 +0000 UTC m=+0.078771427 container start ef587bf8366fd71bc847ac7103a8684e40edd037254dd6eab3d7216a89ea1832 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3)
Nov 26 11:52:13 compute-0 podman[248187]: nova_compute
Nov 26 11:52:13 compute-0 nova_compute[248203]: + sudo -E kolla_set_configs
Nov 26 11:52:13 compute-0 systemd[1]: Started nova_compute container.
Nov 26 11:52:13 compute-0 sudo[247947]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Validating config file
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Copying service configuration files
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Deleting /etc/ceph
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Creating directory /etc/ceph
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Setting permission for /etc/ceph
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Writing out command to execute
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 26 11:52:13 compute-0 nova_compute[248203]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 26 11:52:13 compute-0 nova_compute[248203]: ++ cat /run_command
Nov 26 11:52:13 compute-0 nova_compute[248203]: + CMD=nova-compute
Nov 26 11:52:13 compute-0 nova_compute[248203]: + ARGS=
Nov 26 11:52:13 compute-0 nova_compute[248203]: + sudo kolla_copy_cacerts
Nov 26 11:52:13 compute-0 nova_compute[248203]: + [[ ! -n '' ]]
Nov 26 11:52:13 compute-0 nova_compute[248203]: + . kolla_extend_start
Nov 26 11:52:13 compute-0 nova_compute[248203]: + echo 'Running command: '\''nova-compute'\'''
Nov 26 11:52:13 compute-0 nova_compute[248203]: Running command: 'nova-compute'
Nov 26 11:52:13 compute-0 nova_compute[248203]: + umask 0022
Nov 26 11:52:13 compute-0 nova_compute[248203]: + exec nova-compute
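[editor's note] The INFO:__main__ lines above are the kolla entrypoint (kolla_set_configs) reading /var/lib/kolla/config_files/config.json and, because KOLLA_CONFIG_STRATEGY is COPY_ALWAYS, deleting and re-copying each destination file before the shell trace writes out /run_command and exec's nova-compute. The following is only a minimal Python sketch of that copy loop, not the actual kolla source; the config.json field layout (source/dest/perm under "config_files") is an assumption based on the paths printed above.

import json
import os
import shutil

CONFIG = "/var/lib/kolla/config_files/config.json"

def copy_always(entry):
    # entry is assumed to look like:
    # {"source": "/var/lib/kolla/config_files/01-nova.conf",
    #  "dest": "/etc/nova/nova.conf.d/01-nova.conf", "perm": "0600"}
    dest = entry["dest"]
    if os.path.lexists(dest):
        print(f"Deleting {dest}")
        if os.path.isdir(dest) and not os.path.islink(dest):
            shutil.rmtree(dest)
        else:
            os.remove(dest)
    print(f"Copying {entry['source']} to {dest}")
    if os.path.isdir(entry["source"]):
        shutil.copytree(entry["source"], dest)
    else:
        shutil.copy2(entry["source"], dest)
    print(f"Setting permission for {dest}")
    os.chmod(dest, int(entry.get("perm", "0644"), 8))

if __name__ == "__main__":
    with open(CONFIG) as f:
        cfg = json.load(f)
    for entry in cfg.get("config_files", []):
        copy_always(entry)
    # The entrypoint then writes cfg["command"] to /run_command, which the
    # shell trace above reads back with `cat /run_command` before exec'ing it.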
Nov 26 11:52:14 compute-0 sudo[248364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdsozrtsjnubzxlbpoeeltvixuxbxvtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764157933.9094355-1566-121722550098465/AnsiballZ_podman_container.py'
Nov 26 11:52:14 compute-0 sudo[248364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:52:14 compute-0 python3.9[248366]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 26 11:52:14 compute-0 lucid_kare[248185]: {
Nov 26 11:52:14 compute-0 lucid_kare[248185]:     "2627095b-eef8-4027-bfef-68bf7cb6801f": {
Nov 26 11:52:14 compute-0 lucid_kare[248185]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:52:14 compute-0 lucid_kare[248185]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 11:52:14 compute-0 lucid_kare[248185]:         "osd_id": 1,
Nov 26 11:52:14 compute-0 lucid_kare[248185]:         "osd_uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:52:14 compute-0 lucid_kare[248185]:         "type": "bluestore"
Nov 26 11:52:14 compute-0 lucid_kare[248185]:     },
Nov 26 11:52:14 compute-0 lucid_kare[248185]:     "a9ad59a0-aa2e-4d92-b571-519d2d145b6a": {
Nov 26 11:52:14 compute-0 lucid_kare[248185]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:52:14 compute-0 lucid_kare[248185]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 11:52:14 compute-0 lucid_kare[248185]:         "osd_id": 0,
Nov 26 11:52:14 compute-0 lucid_kare[248185]:         "osd_uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:52:14 compute-0 lucid_kare[248185]:         "type": "bluestore"
Nov 26 11:52:14 compute-0 lucid_kare[248185]:     },
Nov 26 11:52:14 compute-0 lucid_kare[248185]:     "d56156fb-7361-4bef-b06b-1320109b4323": {
Nov 26 11:52:14 compute-0 lucid_kare[248185]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:52:14 compute-0 lucid_kare[248185]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 11:52:14 compute-0 lucid_kare[248185]:         "osd_id": 2,
Nov 26 11:52:14 compute-0 lucid_kare[248185]:         "osd_uuid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:52:14 compute-0 lucid_kare[248185]:         "type": "bluestore"
Nov 26 11:52:14 compute-0 lucid_kare[248185]:     }
Nov 26 11:52:14 compute-0 lucid_kare[248185]: }
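[editor's note] The JSON block above, emitted by the short-lived ceph container logged as lucid_kare, maps each OSD UUID to its cluster fsid, logical-volume device, OSD id and store type. A small hedged Python helper showing how such output could be reduced to an osd_id-to-device table; the variable names are illustrative and this is not part of the deployment tooling.

import json

# raw would hold the JSON printed by the lucid_kare container above,
# e.g. {"2627095b-...": {"ceph_fsid": "...", "device": "/dev/mapper/ceph_vg1-ceph_lv1",
#                        "osd_id": 1, "osd_uuid": "2627095b-...", "type": "bluestore"}, ...}
raw = '{"2627095b-eef8-4027-bfef-68bf7cb6801f": {"ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7", "device": "/dev/mapper/ceph_vg1-ceph_lv1", "osd_id": 1, "osd_uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f", "type": "bluestore"}}'

osds = json.loads(raw)
# Build an osd_id -> device map, e.g. {1: "/dev/mapper/ceph_vg1-ceph_lv1"}
by_id = {meta["osd_id"]: meta["device"] for meta in osds.values()}
print(by_id)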
Nov 26 11:52:14 compute-0 systemd[1]: Started libpod-conmon-7286887c5bc189491a9ce67c729d49652f375cb93b520d1b03714580ccb97df8.scope.
Nov 26 11:52:14 compute-0 systemd[1]: libpod-3372dae3b9a4c9d0fd53b8283aa0b772d8854e7dd44ec484e4aab95015f894a6.scope: Deactivated successfully.
Nov 26 11:52:14 compute-0 podman[248162]: 2025-11-26 11:52:14.439817838 +0000 UTC m=+1.104063186 container died 3372dae3b9a4c9d0fd53b8283aa0b772d8854e7dd44ec484e4aab95015f894a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kare, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 11:52:14 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:52:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-625a3a2003bad1fcbdd210c1b6517e6c3d9ae882e5cacc341cb315e002ffaf2d-merged.mount: Deactivated successfully.
Nov 26 11:52:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba4365c154694e5369744f35231320fbfe7143c6a91240690b6cec01199e7ed/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba4365c154694e5369744f35231320fbfe7143c6a91240690b6cec01199e7ed/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba4365c154694e5369744f35231320fbfe7143c6a91240690b6cec01199e7ed/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 26 11:52:14 compute-0 podman[248408]: 2025-11-26 11:52:14.470464024 +0000 UTC m=+0.080445193 container init 7286887c5bc189491a9ce67c729d49652f375cb93b520d1b03714580ccb97df8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init)
Nov 26 11:52:14 compute-0 podman[248408]: 2025-11-26 11:52:14.475799126 +0000 UTC m=+0.085780304 container start 7286887c5bc189491a9ce67c729d49652f375cb93b520d1b03714580ccb97df8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 11:52:14 compute-0 python3.9[248366]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 26 11:52:14 compute-0 podman[248162]: 2025-11-26 11:52:14.483491332 +0000 UTC m=+1.147736680 container remove 3372dae3b9a4c9d0fd53b8283aa0b772d8854e7dd44ec484e4aab95015f894a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kare, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 11:52:14 compute-0 systemd[1]: libpod-conmon-3372dae3b9a4c9d0fd53b8283aa0b772d8854e7dd44ec484e4aab95015f894a6.scope: Deactivated successfully.
Nov 26 11:52:14 compute-0 sudo[248047]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:52:14 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:52:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:52:14 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:52:14 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 37be3844-cbb3-4722-9ad4-c87161dbc79b does not exist
Nov 26 11:52:14 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 3b89229e-fc19-43f2-99e3-8a8be8186963 does not exist
Nov 26 11:52:14 compute-0 nova_compute_init[248439]: INFO:nova_statedir:Applying nova statedir ownership
Nov 26 11:52:14 compute-0 nova_compute_init[248439]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 26 11:52:14 compute-0 nova_compute_init[248439]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 26 11:52:14 compute-0 nova_compute_init[248439]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 26 11:52:14 compute-0 nova_compute_init[248439]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 26 11:52:14 compute-0 nova_compute_init[248439]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 26 11:52:14 compute-0 nova_compute_init[248439]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 26 11:52:14 compute-0 nova_compute_init[248439]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 26 11:52:14 compute-0 nova_compute_init[248439]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 26 11:52:14 compute-0 nova_compute_init[248439]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 26 11:52:14 compute-0 nova_compute_init[248439]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 26 11:52:14 compute-0 nova_compute_init[248439]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 26 11:52:14 compute-0 nova_compute_init[248439]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 26 11:52:14 compute-0 nova_compute_init[248439]: INFO:nova_statedir:Nova statedir ownership complete
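[editor's note] The nova_compute_init lines above come from nova_statedir_ownership.py: it walks /var/lib/nova, chowns anything not already owned by the nova uid/gid (42436:42436 in this log) and resets the SELinux context, skipping paths listed in NOVA_STATEDIR_OWNERSHIP_SKIP (here /var/lib/nova/compute_id). The sketch below only mirrors what the log shows and is not the real script; the SELinux relabel step is noted in a comment rather than implemented.

import os

TARGET_UID = TARGET_GID = 42436        # nova uid/gid seen in the log above
STATEDIR = "/var/lib/nova"
SKIP = {p for p in os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "").split(":") if p}

def fix_one(path):
    st = os.lstat(path)
    print(f"Checking uid: {st.st_uid} gid: {st.st_gid} path: {path}")
    if (st.st_uid, st.st_gid) == (TARGET_UID, TARGET_GID):
        print(f"Ownership of {path} already {TARGET_UID}:{TARGET_GID}")
    else:
        print(f"Changing ownership of {path} from {st.st_uid}:{st.st_gid} "
              f"to {TARGET_UID}:{TARGET_GID}")
        os.lchown(path, TARGET_UID, TARGET_GID)
    # The real script also sets the SELinux context to
    # system_u:object_r:container_file_t:s0 (via the bind-mounted
    # /var/lib/_nova_secontext directory); omitted in this sketch.

fix_one(STATEDIR)
for root, dirs, files in os.walk(STATEDIR):
    for name in dirs + files:
        path = os.path.join(root, name)
        if path in SKIP:
            continue
        fix_one(path)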
Nov 26 11:52:14 compute-0 systemd[1]: libpod-7286887c5bc189491a9ce67c729d49652f375cb93b520d1b03714580ccb97df8.scope: Deactivated successfully.
Nov 26 11:52:14 compute-0 podman[248455]: 2025-11-26 11:52:14.567160485 +0000 UTC m=+0.020108855 container died 7286887c5bc189491a9ce67c729d49652f375cb93b520d1b03714580ccb97df8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 11:52:14 compute-0 sudo[248449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:52:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7286887c5bc189491a9ce67c729d49652f375cb93b520d1b03714580ccb97df8-userdata-shm.mount: Deactivated successfully.
Nov 26 11:52:14 compute-0 sudo[248449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:52:14 compute-0 podman[248455]: 2025-11-26 11:52:14.592925305 +0000 UTC m=+0.045873655 container cleanup 7286887c5bc189491a9ce67c729d49652f375cb93b520d1b03714580ccb97df8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=edpm)
Nov 26 11:52:14 compute-0 sudo[248449]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:14 compute-0 systemd[1]: libpod-conmon-7286887c5bc189491a9ce67c729d49652f375cb93b520d1b03714580ccb97df8.scope: Deactivated successfully.
Nov 26 11:52:14 compute-0 sudo[248364]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:14 compute-0 sudo[248496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:52:14 compute-0 sudo[248496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:52:14 compute-0 sudo[248496]: pam_unix(sudo:session): session closed for user root
Nov 26 11:52:14 compute-0 ceph-mon[74928]: pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:14 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:52:14 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:52:15 compute-0 sshd-session[218414]: Connection closed by 192.168.122.30 port 39404
Nov 26 11:52:15 compute-0 sshd-session[218411]: pam_unix(sshd:session): session closed for user zuul
Nov 26 11:52:15 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Nov 26 11:52:15 compute-0 systemd[1]: session-49.scope: Consumed 1min 38.618s CPU time.
Nov 26 11:52:15 compute-0 systemd-logind[744]: Session 49 logged out. Waiting for processes to exit.
Nov 26 11:52:15 compute-0 systemd-logind[744]: Removed session 49.
Nov 26 11:52:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-eba4365c154694e5369744f35231320fbfe7143c6a91240690b6cec01199e7ed-merged.mount: Deactivated successfully.
Nov 26 11:52:15 compute-0 nova_compute[248203]: 2025-11-26 11:52:15.490 248207 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 26 11:52:15 compute-0 nova_compute[248203]: 2025-11-26 11:52:15.492 248207 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 26 11:52:15 compute-0 nova_compute[248203]: 2025-11-26 11:52:15.493 248207 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 26 11:52:15 compute-0 nova_compute[248203]: 2025-11-26 11:52:15.493 248207 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 26 11:52:15 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:15 compute-0 nova_compute[248203]: 2025-11-26 11:52:15.606 248207 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 11:52:15 compute-0 nova_compute[248203]: 2025-11-26 11:52:15.617 248207 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 11:52:15 compute-0 nova_compute[248203]: 2025-11-26 11:52:15.617 248207 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.008 248207 INFO nova.virt.driver [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.092 248207 INFO nova.compute.provider_config [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.105 248207 DEBUG oslo_concurrency.lockutils [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.106 248207 DEBUG oslo_concurrency.lockutils [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.106 248207 DEBUG oslo_concurrency.lockutils [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.106 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.107 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.107 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.107 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.107 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.107 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.107 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.108 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.108 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.108 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.108 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.108 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.109 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.109 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.109 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.109 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.109 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.109 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.110 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.110 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.110 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.110 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.110 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.111 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.111 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.111 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.111 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.111 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.112 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.112 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.112 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.112 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.112 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.113 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.113 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.113 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.113 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.113 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.113 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.114 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.114 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.114 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.114 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.115 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.115 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.115 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.115 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.115 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.115 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.116 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.116 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.116 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.116 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.116 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.117 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.117 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.117 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.117 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.117 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.117 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.118 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.118 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.118 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.118 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.118 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.118 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.119 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.119 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.119 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.119 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.119 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.119 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.120 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.120 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.120 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.120 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.120 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.121 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.121 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.121 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.121 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.121 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.121 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.122 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.122 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.122 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.122 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.122 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.123 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.123 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.123 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.123 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.123 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.123 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.124 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.124 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.124 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.124 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.124 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.124 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.125 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.125 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.125 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.125 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.125 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.125 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.126 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.126 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.126 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.126 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.126 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.127 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.127 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.127 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.127 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.127 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.127 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.128 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.128 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.128 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.128 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.128 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.128 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.129 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.129 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.129 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.129 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.129 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.129 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.130 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.130 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.130 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.130 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.130 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.131 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.131 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.131 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.131 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.131 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.131 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.132 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.132 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.132 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.132 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.132 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.132 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.133 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.133 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.133 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.133 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.133 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.134 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.134 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.134 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.134 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.134 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.134 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.135 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.135 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.135 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.135 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.135 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.136 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.136 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.136 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.136 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.136 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.136 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.137 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.137 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.137 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.137 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.137 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.137 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.138 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.138 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.138 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.138 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.138 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.139 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.139 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.139 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.139 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.139 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.139 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.140 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.140 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.140 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.140 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.140 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.141 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.141 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.141 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.141 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.141 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.141 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.142 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.142 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.142 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.142 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.142 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.142 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.143 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.143 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.143 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.143 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.143 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.144 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.144 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.144 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.144 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.144 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.144 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.145 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.145 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.145 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.145 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.145 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.146 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.146 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.146 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.146 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.146 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.146 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.147 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.147 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.147 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.147 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.147 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.147 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.148 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.148 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.148 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.148 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.148 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.148 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.149 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.149 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.149 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.149 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.149 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.150 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.150 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.150 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.150 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.150 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.150 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.151 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.151 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.151 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.151 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.151 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.152 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.152 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.152 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.152 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.152 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.152 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.153 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.153 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.153 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.153 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.153 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.153 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.154 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.154 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.154 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.154 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.154 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.155 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.155 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.155 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.155 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.155 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.155 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.156 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.156 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.156 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.156 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.156 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.157 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.157 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.157 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.157 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.157 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.157 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.158 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.158 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.158 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.158 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.158 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.158 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.159 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.159 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.159 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.159 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.159 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.159 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.160 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.160 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.160 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.160 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.160 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.161 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.161 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.161 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.161 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.161 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.161 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.162 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.162 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.162 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.162 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.162 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.162 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.163 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.163 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.163 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.163 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.163 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.164 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.164 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.164 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.164 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.164 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.165 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.165 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.165 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.165 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.165 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.165 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.166 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.166 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.166 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.166 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.166 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.167 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.167 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.167 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.167 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.167 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.167 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.168 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.168 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.168 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.168 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.168 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.169 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.169 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.169 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.169 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.169 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.170 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.170 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.170 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.170 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.171 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.171 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.171 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.171 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.171 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.172 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.172 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.172 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.172 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.172 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.172 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.173 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.173 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.173 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.173 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.173 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.173 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.174 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.174 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.174 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.174 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.174 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.174 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.175 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.175 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.175 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.175 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.175 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.176 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.176 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.176 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.176 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.176 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.176 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.177 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.177 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.177 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.177 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.177 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.178 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.178 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.178 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.178 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.178 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.178 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.179 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.179 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.179 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.179 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.179 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.179 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.180 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.180 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.180 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.180 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.181 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.181 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.181 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.181 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.181 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.182 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.182 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.182 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.182 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.182 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.182 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.182 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.183 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.183 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.183 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.183 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.183 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.183 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.183 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.184 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.184 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.184 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.184 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.184 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.184 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.184 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.184 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.185 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.185 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.185 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.185 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.185 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.185 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.185 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.186 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.186 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.186 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.186 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.186 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.186 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.187 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.187 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.187 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.187 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.187 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.187 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.187 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.188 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.188 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.188 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.188 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.188 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.188 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.188 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.188 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.189 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.189 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.189 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.189 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.189 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.189 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.189 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.190 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.190 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.190 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.190 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.190 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.190 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.190 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.191 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.191 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.191 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.191 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.191 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.191 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.191 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.191 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.192 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.192 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.192 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.192 248207 WARNING oslo_config.cfg [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 26 11:52:16 compute-0 nova_compute[248203]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 26 11:52:16 compute-0 nova_compute[248203]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 26 11:52:16 compute-0 nova_compute[248203]: and ``live_migration_inbound_addr`` respectively.
Nov 26 11:52:16 compute-0 nova_compute[248203]: ).  Its value may be silently ignored in the future.
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.192 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.193 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.193 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.193 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.193 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.193 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.193 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.193 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.194 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.194 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.194 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.194 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.194 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.194 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.194 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.195 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.195 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.195 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.195 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.rbd_secret_uuid        = ebab460c-3fd7-5f66-aa87-e10c143123f7 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.195 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.195 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.195 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.196 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.196 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.196 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.196 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.196 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.196 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.196 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.197 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.197 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.197 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.197 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.197 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.197 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.197 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.198 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.198 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.198 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.198 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.198 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.198 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.198 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.199 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.199 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.199 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.199 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.199 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.199 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.199 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.200 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.200 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.200 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.200 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.200 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.200 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.200 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.201 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.201 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.201 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.201 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.201 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.201 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.201 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.201 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.202 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.202 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.202 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.202 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.202 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.202 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.202 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.203 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.203 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.203 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.203 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.203 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.203 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.203 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.204 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.204 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.204 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.204 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.204 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.204 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.205 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.205 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.205 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.205 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.205 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.205 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.206 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.206 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.206 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.206 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.206 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.206 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.206 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.206 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.207 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.207 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.207 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.207 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.207 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.207 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.207 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.208 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.208 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.208 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.208 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.208 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.208 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.208 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.208 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.209 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.209 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.209 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.209 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.209 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.209 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.209 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.210 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.210 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.210 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.210 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.210 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.210 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.210 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.211 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.211 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.211 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.211 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.211 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.211 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.211 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.211 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.212 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.212 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.212 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.212 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.212 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.212 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.213 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.213 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.213 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.213 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.213 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.213 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.214 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.214 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.214 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.214 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.214 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.214 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.214 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.215 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.215 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.215 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.215 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.215 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.215 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.215 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.216 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.216 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.216 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.216 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.216 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.216 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.216 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.216 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.217 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.217 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.217 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.217 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.217 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.217 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.217 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.218 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.218 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.218 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.218 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.218 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.218 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.219 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.219 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.219 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.219 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.219 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.219 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.219 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.219 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.220 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.220 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.220 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.220 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.220 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.220 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.221 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.221 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.221 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.221 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.221 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.221 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.221 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.222 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.222 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.222 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.222 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.222 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.222 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.222 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.222 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.223 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.223 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.223 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.223 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.223 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.223 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.223 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.224 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.224 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.224 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.224 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.224 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.224 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.224 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.224 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.225 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.225 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.225 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.225 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.225 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.225 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.225 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.226 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.226 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.226 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.226 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.226 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.226 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.226 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.226 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.227 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.227 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.227 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.227 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.227 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.227 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.228 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.228 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.228 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.228 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.228 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.228 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.228 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.229 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
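The vnc.* options dumped above control how this compute node exposes instance consoles: the VNC server binds on all addresses (server_listen = ::0), proxies reach the node at 192.168.122.100, and clients are handed the noVNC endpoint on apps-crc.testing. As a minimal sketch, assuming the usual /etc/nova/nova.conf location and standard oslo.config INI syntax (both assumptions, not shown in the log), the same values would be written as:

    [vnc]
    enabled = True
    auth_schemes = none
    novncproxy_base_url = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html
    server_listen = ::0
    server_proxyclient_address = 192.168.122.100

Only the values relevant to console access are reproduced here; the rest of the vnc.* group is omitted from the excerpt.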
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.229 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.229 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.229 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.229 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.229 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.229 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.230 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.230 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.230 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.230 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.230 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.230 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.230 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.230 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.231 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.231 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.231 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.231 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.231 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.231 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.231 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.232 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.232 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.232 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.232 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.232 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.232 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.232 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.233 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.233 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.233 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.233 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.233 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.233 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.233 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.234 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.234 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.234 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.234 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.234 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.234 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.234 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.235 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.235 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.235 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.235 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.235 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.235 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.235 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.236 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.236 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.236 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.236 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.236 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.236 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.236 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.237 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.237 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.237 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.237 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.237 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.237 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.237 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.237 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.238 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.238 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.238 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.238 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.238 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.238 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.238 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.239 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.239 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.239 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.239 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.239 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.239 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.239 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.240 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.240 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.240 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
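The oslo_messaging_rabbit.* block above shows the RPC transport tuned for RabbitMQ quorum queues with durable queues enabled and TLS switched off (ssl = False, empty certificate paths). Expressed as a nova.conf stanza carrying only the values logged above (again assuming the standard INI layout, which the log itself does not show), this would read:

    [oslo_messaging_rabbit]
    amqp_durable_queues = True
    rabbit_quorum_queue = True
    heartbeat_timeout_threshold = 60
    heartbeat_rate = 2
    ssl = False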
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.240 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.240 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.240 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.240 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.241 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.241 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.241 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.241 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.241 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.241 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.241 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.241 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.242 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.242 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.242 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.242 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.242 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.242 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.242 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.243 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.243 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.243 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.243 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.243 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.243 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.243 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.243 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.244 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.244 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.244 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.244 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.244 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.244 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.244 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.245 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.245 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.245 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.245 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.245 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.245 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.245 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.246 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.246 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.246 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.246 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.246 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.246 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.246 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.246 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.247 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.247 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.247 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.247 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.247 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.247 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.247 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.248 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.248 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.248 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.248 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.248 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.248 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.248 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.249 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.249 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.249 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.249 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.249 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.249 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.249 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.249 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.250 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.250 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.250 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.250 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.250 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.250 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.250 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.251 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.251 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.251 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.251 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.251 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.251 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.251 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.251 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.252 248207 DEBUG oslo_service.service [None req-b863afe2-d32b-46e6-8ae8-359e6299ef4b - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.253 248207 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.263 248207 DEBUG nova.virt.libvirt.host [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.264 248207 DEBUG nova.virt.libvirt.host [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.264 248207 DEBUG nova.virt.libvirt.host [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.264 248207 DEBUG nova.virt.libvirt.host [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.282 248207 DEBUG nova.virt.libvirt.host [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f354a0f1460> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.285 248207 DEBUG nova.virt.libvirt.host [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f354a0f1460> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.286 248207 INFO nova.virt.libvirt.driver [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Connection event '1' reason 'None'
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.290 248207 INFO nova.virt.libvirt.host [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Libvirt host capabilities <capabilities>
Nov 26 11:52:16 compute-0 nova_compute[248203]: 
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <host>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <uuid>99bbe822-1106-4372-ba1d-ab3f2104eabd</uuid>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <cpu>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <arch>x86_64</arch>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model>EPYC-Milan-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <vendor>AMD</vendor>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <microcode version='167776725'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <signature family='25' model='1' stepping='1'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <topology sockets='4' dies='1' clusters='1' cores='1' threads='1'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <maxphysaddr mode='emulate' bits='48'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature name='x2apic'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature name='tsc-deadline'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature name='osxsave'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature name='hypervisor'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature name='tsc_adjust'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature name='ospke'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature name='vaes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature name='vpclmulqdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature name='spec-ctrl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature name='stibp'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature name='arch-capabilities'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature name='ssbd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature name='cmp_legacy'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature name='virt-ssbd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature name='lbrv'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature name='tsc-scale'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature name='vmcb-clean'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature name='pause-filter'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature name='pfthreshold'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature name='v-vmsave-vmload'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature name='vgif'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature name='rdctl-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature name='skip-l1dfl-vmentry'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature name='mds-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature name='pschange-mc-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <pages unit='KiB' size='4'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <pages unit='KiB' size='2048'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <pages unit='KiB' size='1048576'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </cpu>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <power_management>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <suspend_mem/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </power_management>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <iommu support='no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <migration_features>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <live/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <uri_transports>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <uri_transport>tcp</uri_transport>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <uri_transport>rdma</uri_transport>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </uri_transports>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </migration_features>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <topology>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <cells num='1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <cell id='0'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:           <memory unit='KiB'>7865360</memory>
Nov 26 11:52:16 compute-0 nova_compute[248203]:           <pages unit='KiB' size='4'>1966340</pages>
Nov 26 11:52:16 compute-0 nova_compute[248203]:           <pages unit='KiB' size='2048'>0</pages>
Nov 26 11:52:16 compute-0 nova_compute[248203]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 26 11:52:16 compute-0 nova_compute[248203]:           <distances>
Nov 26 11:52:16 compute-0 nova_compute[248203]:             <sibling id='0' value='10'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:           </distances>
Nov 26 11:52:16 compute-0 nova_compute[248203]:           <cpus num='4'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:           </cpus>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         </cell>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </cells>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </topology>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <cache>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </cache>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <secmodel>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model>selinux</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <doi>0</doi>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </secmodel>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <secmodel>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model>dac</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <doi>0</doi>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </secmodel>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   </host>
Nov 26 11:52:16 compute-0 nova_compute[248203]: 
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <guest>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <os_type>hvm</os_type>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <arch name='i686'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <wordsize>32</wordsize>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <domain type='qemu'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <domain type='kvm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </arch>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <features>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <pae/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <nonpae/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <acpi default='on' toggle='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <apic default='on' toggle='no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <cpuselection/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <deviceboot/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <disksnapshot default='on' toggle='no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <externalSnapshot/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </features>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   </guest>
Nov 26 11:52:16 compute-0 nova_compute[248203]: 
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <guest>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <os_type>hvm</os_type>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <arch name='x86_64'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <wordsize>64</wordsize>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <domain type='qemu'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <domain type='kvm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </arch>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <features>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <acpi default='on' toggle='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <apic default='on' toggle='no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <cpuselection/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <deviceboot/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <disksnapshot default='on' toggle='no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <externalSnapshot/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </features>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   </guest>
Nov 26 11:52:16 compute-0 nova_compute[248203]: 
Nov 26 11:52:16 compute-0 nova_compute[248203]: </capabilities>
Nov 26 11:52:16 compute-0 nova_compute[248203]: 
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.298 248207 WARNING nova.virt.libvirt.driver [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.299 248207 DEBUG nova.virt.libvirt.volume.mount [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.304 248207 DEBUG nova.virt.libvirt.host [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.322 248207 DEBUG nova.virt.libvirt.host [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 26 11:52:16 compute-0 nova_compute[248203]: <domainCapabilities>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <path>/usr/libexec/qemu-kvm</path>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <domain>kvm</domain>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <arch>i686</arch>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <vcpu max='4096'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <iothreads supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <os supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <enum name='firmware'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <loader supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='type'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>rom</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>pflash</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='readonly'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>yes</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>no</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='secure'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>no</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </loader>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   </os>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <cpu>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <mode name='host-passthrough' supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='hostPassthroughMigratable'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>on</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>off</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </mode>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <mode name='maximum' supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='maximumMigratable'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>on</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>off</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </mode>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <mode name='host-model' supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model fallback='forbid'>EPYC-Milan</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <vendor>AMD</vendor>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <maxphysaddr mode='passthrough' limit='48'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='x2apic'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='tsc-deadline'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='hypervisor'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='tsc_adjust'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='vaes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='vpclmulqdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='spec-ctrl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='stibp'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='ssbd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='cmp_legacy'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='overflow-recov'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='succor'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='virt-ssbd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='lbrv'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='tsc-scale'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='vmcb-clean'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='flushbyasid'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='pause-filter'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='pfthreshold'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='v-vmsave-vmload'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='vgif'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </mode>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <mode name='custom' supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Broadwell'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Broadwell-IBRS'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Broadwell-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Broadwell-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server-v4'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server-v5'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cooperlake'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cooperlake-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cooperlake-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Denverton'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mpx'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Denverton-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mpx'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='EPYC-Genoa'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amd-psfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='auto-ibrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='no-nested-data-bp'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='null-sel-clr-base'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='stibp-always-on'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='EPYC-Genoa-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amd-psfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='auto-ibrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='no-nested-data-bp'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='null-sel-clr-base'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='stibp-always-on'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='EPYC-Milan-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amd-psfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='no-nested-data-bp'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='null-sel-clr-base'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='stibp-always-on'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='GraniteRapids'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mcdt-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='pbrsb-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='prefetchiti'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='GraniteRapids-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mcdt-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='pbrsb-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='prefetchiti'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='GraniteRapids-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx10'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx10-128'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx10-256'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx10-512'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mcdt-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='pbrsb-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='prefetchiti'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Haswell'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Haswell-IBRS'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Haswell-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Haswell-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-noTSX'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v4'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v5'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v6'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v7'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='KnightsMill'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-4fmaps'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-4vnniw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512er'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512pf'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='KnightsMill-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-4fmaps'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-4vnniw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512er'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512pf'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Opteron_G4'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fma4'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xop'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Opteron_G4-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fma4'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xop'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Opteron_G5'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fma4'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tbm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xop'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Opteron_G5-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fma4'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tbm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xop'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='SapphireRapids'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='SapphireRapids-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='SapphireRapids-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='SapphireRapids-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='SierraForest'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-ne-convert'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cmpccxadd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mcdt-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='pbrsb-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='SierraForest-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-ne-convert'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cmpccxadd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mcdt-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='pbrsb-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Client'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Client-IBRS'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Client-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Client-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-IBRS'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-v4'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-v5'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Snowridge'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='core-capability'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mpx'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='split-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Snowridge-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='core-capability'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mpx'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='split-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Snowridge-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='core-capability'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='split-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Snowridge-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='core-capability'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='split-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Snowridge-v4'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='athlon'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnow'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnowext'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='athlon-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnow'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnowext'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='core2duo'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='core2duo-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='coreduo'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='coreduo-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='n270'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='n270-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='phenom'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnow'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnowext'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='phenom-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnow'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnowext'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </mode>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   </cpu>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <memoryBacking supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <enum name='sourceType'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <value>file</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <value>anonymous</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <value>memfd</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   </memoryBacking>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <devices>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <disk supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='diskDevice'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>disk</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>cdrom</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>floppy</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>lun</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='bus'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>fdc</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>scsi</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>usb</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>sata</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='model'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio-transitional</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio-non-transitional</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </disk>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <graphics supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='type'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>vnc</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>egl-headless</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>dbus</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </graphics>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <video supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='modelType'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>vga</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>cirrus</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>none</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>bochs</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>ramfb</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </video>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <hostdev supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='mode'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>subsystem</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='startupPolicy'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>default</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>mandatory</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>requisite</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>optional</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='subsysType'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>usb</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>pci</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>scsi</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='capsType'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='pciBackend'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </hostdev>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <rng supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='model'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio-transitional</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio-non-transitional</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='backendModel'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>random</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>egd</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>builtin</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </rng>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <filesystem supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='driverType'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>path</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>handle</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtiofs</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </filesystem>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <tpm supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='model'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>tpm-tis</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>tpm-crb</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='backendModel'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>emulator</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>external</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='backendVersion'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>2.0</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </tpm>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <redirdev supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='bus'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>usb</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </redirdev>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <channel supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='type'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>pty</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>unix</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </channel>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <crypto supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='model'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='type'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>qemu</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='backendModel'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>builtin</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </crypto>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <interface supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='backendType'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>default</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>passt</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </interface>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <panic supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='model'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>isa</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>hyperv</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </panic>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <console supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='type'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>null</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>vc</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>pty</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>dev</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>file</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>pipe</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>stdio</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>udp</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>tcp</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>unix</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>qemu-vdagent</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>dbus</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </console>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   </devices>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <features>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <gic supported='no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <vmcoreinfo supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <genid supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <backingStoreInput supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <backup supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <async-teardown supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <ps2 supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <sev supported='no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <sgx supported='no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <hyperv supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='features'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>relaxed</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>vapic</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>spinlocks</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>vpindex</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>runtime</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>synic</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>stimer</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>reset</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>vendor_id</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>frequencies</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>reenlightenment</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>tlbflush</value>
Nov 26 11:52:16 compute-0 podman[248570]: 2025-11-26 11:52:16.355770284 +0000 UTC m=+0.065067482 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>ipi</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>avic</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>emsr_bitmap</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>xmm_input</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <defaults>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <spinlocks>4095</spinlocks>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <stimer_direct>on</stimer_direct>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <tlbflush_direct>on</tlbflush_direct>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <tlbflush_extended>on</tlbflush_extended>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </defaults>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </hyperv>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <launchSecurity supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='sectype'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>tdx</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </launchSecurity>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   </features>
Nov 26 11:52:16 compute-0 nova_compute[248203]: </domainCapabilities>
Nov 26 11:52:16 compute-0 nova_compute[248203]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.325 248207 DEBUG nova.virt.libvirt.host [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 26 11:52:16 compute-0 nova_compute[248203]: <domainCapabilities>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <path>/usr/libexec/qemu-kvm</path>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <domain>kvm</domain>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <arch>i686</arch>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <vcpu max='240'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <iothreads supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <os supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <enum name='firmware'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <loader supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='type'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>rom</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>pflash</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='readonly'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>yes</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>no</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='secure'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>no</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </loader>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   </os>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <cpu>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <mode name='host-passthrough' supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='hostPassthroughMigratable'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>on</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>off</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </mode>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <mode name='maximum' supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='maximumMigratable'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>on</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>off</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </mode>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <mode name='host-model' supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model fallback='forbid'>EPYC-Milan</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <vendor>AMD</vendor>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <maxphysaddr mode='passthrough' limit='48'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='x2apic'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='tsc-deadline'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='hypervisor'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='tsc_adjust'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='vaes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='vpclmulqdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='spec-ctrl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='stibp'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='ssbd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='cmp_legacy'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='overflow-recov'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='succor'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='virt-ssbd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='lbrv'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='tsc-scale'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='vmcb-clean'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='flushbyasid'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='pause-filter'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='pfthreshold'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='v-vmsave-vmload'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='vgif'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </mode>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <mode name='custom' supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Broadwell'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Broadwell-IBRS'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Broadwell-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Broadwell-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server-v4'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server-v5'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cooperlake'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cooperlake-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cooperlake-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Denverton'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mpx'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Denverton-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mpx'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='EPYC-Genoa'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amd-psfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='auto-ibrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='no-nested-data-bp'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='null-sel-clr-base'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='stibp-always-on'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='EPYC-Genoa-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amd-psfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='auto-ibrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='no-nested-data-bp'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='null-sel-clr-base'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='stibp-always-on'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='EPYC-Milan-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amd-psfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='no-nested-data-bp'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='null-sel-clr-base'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='stibp-always-on'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='GraniteRapids'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mcdt-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='pbrsb-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='prefetchiti'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='GraniteRapids-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mcdt-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='pbrsb-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='prefetchiti'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='GraniteRapids-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx10'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx10-128'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx10-256'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx10-512'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mcdt-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='pbrsb-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='prefetchiti'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Haswell'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Haswell-IBRS'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Haswell-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Haswell-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-noTSX'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v4'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v5'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v6'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v7'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='KnightsMill'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-4fmaps'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-4vnniw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512er'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512pf'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='KnightsMill-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-4fmaps'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-4vnniw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512er'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512pf'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Opteron_G4'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fma4'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xop'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Opteron_G4-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fma4'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xop'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Opteron_G5'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fma4'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tbm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xop'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Opteron_G5-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fma4'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tbm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xop'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='SapphireRapids'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='SapphireRapids-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='SapphireRapids-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='SapphireRapids-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='SierraForest'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-ne-convert'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cmpccxadd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mcdt-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='pbrsb-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='SierraForest-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-ne-convert'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cmpccxadd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mcdt-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='pbrsb-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Client'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Client-IBRS'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Client-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Client-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-IBRS'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-v4'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-v5'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Snowridge'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='core-capability'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mpx'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='split-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Snowridge-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='core-capability'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mpx'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='split-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Snowridge-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='core-capability'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='split-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Snowridge-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='core-capability'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='split-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Snowridge-v4'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='athlon'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnow'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnowext'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='athlon-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnow'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnowext'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='core2duo'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='core2duo-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='coreduo'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='coreduo-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='n270'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='n270-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='phenom'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnow'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnowext'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='phenom-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnow'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnowext'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </mode>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   </cpu>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <memoryBacking supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <enum name='sourceType'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <value>file</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <value>anonymous</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <value>memfd</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   </memoryBacking>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <devices>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <disk supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='diskDevice'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>disk</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>cdrom</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>floppy</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>lun</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='bus'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>ide</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>fdc</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>scsi</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>usb</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>sata</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='model'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio-transitional</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio-non-transitional</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </disk>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <graphics supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='type'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>vnc</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>egl-headless</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>dbus</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </graphics>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <video supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='modelType'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>vga</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>cirrus</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>none</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>bochs</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>ramfb</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </video>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <hostdev supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='mode'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>subsystem</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='startupPolicy'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>default</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>mandatory</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>requisite</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>optional</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='subsysType'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>usb</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>pci</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>scsi</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='capsType'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='pciBackend'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </hostdev>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <rng supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='model'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio-transitional</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio-non-transitional</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='backendModel'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>random</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>egd</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>builtin</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </rng>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <filesystem supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='driverType'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>path</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>handle</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtiofs</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </filesystem>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <tpm supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='model'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>tpm-tis</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>tpm-crb</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='backendModel'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>emulator</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>external</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='backendVersion'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>2.0</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </tpm>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <redirdev supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='bus'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>usb</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </redirdev>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <channel supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='type'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>pty</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>unix</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </channel>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <crypto supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='model'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='type'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>qemu</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='backendModel'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>builtin</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </crypto>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <interface supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='backendType'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>default</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>passt</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </interface>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <panic supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='model'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>isa</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>hyperv</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </panic>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <console supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='type'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>null</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>vc</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>pty</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>dev</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>file</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>pipe</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>stdio</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>udp</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>tcp</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>unix</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>qemu-vdagent</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>dbus</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </console>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   </devices>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <features>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <gic supported='no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <vmcoreinfo supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <genid supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <backingStoreInput supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <backup supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <async-teardown supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <ps2 supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <sev supported='no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <sgx supported='no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <hyperv supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='features'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>relaxed</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>vapic</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>spinlocks</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>vpindex</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>runtime</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>synic</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>stimer</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>reset</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>vendor_id</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>frequencies</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>reenlightenment</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>tlbflush</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>ipi</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>avic</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>emsr_bitmap</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>xmm_input</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <defaults>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <spinlocks>4095</spinlocks>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <stimer_direct>on</stimer_direct>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <tlbflush_direct>on</tlbflush_direct>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <tlbflush_extended>on</tlbflush_extended>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </defaults>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </hyperv>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <launchSecurity supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='sectype'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>tdx</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </launchSecurity>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   </features>
Nov 26 11:52:16 compute-0 nova_compute[248203]: </domainCapabilities>
Nov 26 11:52:16 compute-0 nova_compute[248203]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.329 248207 DEBUG nova.virt.libvirt.host [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.336 248207 DEBUG nova.virt.libvirt.host [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 26 11:52:16 compute-0 nova_compute[248203]: <domainCapabilities>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <path>/usr/libexec/qemu-kvm</path>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <domain>kvm</domain>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <arch>x86_64</arch>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <vcpu max='4096'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <iothreads supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <os supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <enum name='firmware'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <value>efi</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <loader supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='type'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>rom</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>pflash</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='readonly'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>yes</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>no</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='secure'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>yes</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>no</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </loader>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   </os>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <cpu>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <mode name='host-passthrough' supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='hostPassthroughMigratable'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>on</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>off</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </mode>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <mode name='maximum' supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='maximumMigratable'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>on</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>off</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </mode>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <mode name='host-model' supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model fallback='forbid'>EPYC-Milan</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <vendor>AMD</vendor>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <maxphysaddr mode='passthrough' limit='48'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='x2apic'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='tsc-deadline'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='hypervisor'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='tsc_adjust'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='vaes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='vpclmulqdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='spec-ctrl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='stibp'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='ssbd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='cmp_legacy'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='overflow-recov'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='succor'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='virt-ssbd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='lbrv'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='tsc-scale'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='vmcb-clean'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='flushbyasid'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='pause-filter'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='pfthreshold'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='v-vmsave-vmload'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='vgif'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </mode>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <mode name='custom' supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Broadwell'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Broadwell-IBRS'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Broadwell-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Broadwell-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server-v4'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server-v5'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cooperlake'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cooperlake-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cooperlake-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Denverton'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mpx'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Denverton-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mpx'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='EPYC-Genoa'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amd-psfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='auto-ibrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='no-nested-data-bp'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='null-sel-clr-base'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='stibp-always-on'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='EPYC-Genoa-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amd-psfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='auto-ibrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='no-nested-data-bp'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='null-sel-clr-base'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='stibp-always-on'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='EPYC-Milan-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amd-psfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='no-nested-data-bp'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='null-sel-clr-base'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='stibp-always-on'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='GraniteRapids'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mcdt-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='pbrsb-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='prefetchiti'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='GraniteRapids-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mcdt-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='pbrsb-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='prefetchiti'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='GraniteRapids-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx10'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx10-128'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx10-256'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx10-512'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mcdt-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='pbrsb-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='prefetchiti'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Haswell'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Haswell-IBRS'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Haswell-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Haswell-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-noTSX'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v4'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v5'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v6'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v7'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='KnightsMill'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-4fmaps'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-4vnniw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512er'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512pf'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='KnightsMill-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-4fmaps'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-4vnniw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512er'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512pf'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Opteron_G4'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fma4'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xop'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Opteron_G4-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fma4'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xop'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Opteron_G5'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fma4'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tbm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xop'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Opteron_G5-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fma4'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tbm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xop'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='SapphireRapids'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='SapphireRapids-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='SapphireRapids-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='SapphireRapids-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='SierraForest'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-ne-convert'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cmpccxadd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mcdt-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='pbrsb-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='SierraForest-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-ne-convert'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cmpccxadd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mcdt-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='pbrsb-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Client'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Client-IBRS'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Client-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Client-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-IBRS'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-v4'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-v5'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Snowridge'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='core-capability'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mpx'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='split-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Snowridge-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='core-capability'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mpx'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='split-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Snowridge-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='core-capability'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='split-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Snowridge-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='core-capability'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='split-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Snowridge-v4'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='athlon'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnow'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnowext'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='athlon-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnow'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnowext'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='core2duo'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='core2duo-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='coreduo'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='coreduo-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='n270'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='n270-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='phenom'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnow'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnowext'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='phenom-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnow'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnowext'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </mode>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   </cpu>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <memoryBacking supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <enum name='sourceType'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <value>file</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <value>anonymous</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <value>memfd</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   </memoryBacking>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <devices>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <disk supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='diskDevice'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>disk</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>cdrom</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>floppy</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>lun</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='bus'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>fdc</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>scsi</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>usb</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>sata</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='model'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio-transitional</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio-non-transitional</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </disk>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <graphics supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='type'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>vnc</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>egl-headless</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>dbus</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </graphics>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <video supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='modelType'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>vga</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>cirrus</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>none</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>bochs</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>ramfb</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </video>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <hostdev supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='mode'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>subsystem</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='startupPolicy'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>default</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>mandatory</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>requisite</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>optional</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='subsysType'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>usb</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>pci</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>scsi</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='capsType'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='pciBackend'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </hostdev>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <rng supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='model'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio-transitional</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio-non-transitional</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='backendModel'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>random</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>egd</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>builtin</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </rng>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <filesystem supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='driverType'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>path</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>handle</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtiofs</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </filesystem>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <tpm supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='model'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>tpm-tis</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>tpm-crb</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='backendModel'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>emulator</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>external</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='backendVersion'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>2.0</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </tpm>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <redirdev supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='bus'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>usb</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </redirdev>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <channel supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='type'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>pty</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>unix</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </channel>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <crypto supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='model'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='type'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>qemu</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='backendModel'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>builtin</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </crypto>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <interface supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='backendType'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>default</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>passt</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </interface>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <panic supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='model'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>isa</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>hyperv</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </panic>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <console supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='type'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>null</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>vc</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>pty</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>dev</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>file</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>pipe</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>stdio</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>udp</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>tcp</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>unix</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>qemu-vdagent</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>dbus</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </console>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   </devices>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <features>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <gic supported='no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <vmcoreinfo supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <genid supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <backingStoreInput supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <backup supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <async-teardown supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <ps2 supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <sev supported='no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <sgx supported='no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <hyperv supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='features'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>relaxed</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>vapic</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>spinlocks</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>vpindex</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>runtime</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>synic</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>stimer</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>reset</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>vendor_id</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>frequencies</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>reenlightenment</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>tlbflush</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>ipi</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>avic</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>emsr_bitmap</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>xmm_input</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <defaults>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <spinlocks>4095</spinlocks>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <stimer_direct>on</stimer_direct>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <tlbflush_direct>on</tlbflush_direct>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <tlbflush_extended>on</tlbflush_extended>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </defaults>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </hyperv>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <launchSecurity supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='sectype'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>tdx</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </launchSecurity>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   </features>
Nov 26 11:52:16 compute-0 nova_compute[248203]: </domainCapabilities>
Nov 26 11:52:16 compute-0 nova_compute[248203]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
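[Editorial aside, not part of the captured log: the domainCapabilities dump above is the XML document libvirt returns for a given emulator/arch/machine type; the <mode name='custom'> section lists every named CPU model, whether it is usable on this host, and, for unusable models, the <blockers> features the host CPU lacks. The sketch below is a minimal, illustrative way to summarize such a dump with the Python standard library; the helper name summarize_custom_models and the trimmed SAMPLE document are assumptions for illustration only and are not Nova's own parsing code (which lives in nova/virt/libvirt/host.py, referenced in the log line above).]

    # Minimal sketch (assumption, not Nova code): summarize usable vs. blocked
    # CPU models from a libvirt domainCapabilities XML document.
    import xml.etree.ElementTree as ET

    # Trimmed, hypothetical excerpt in the same shape as the logged dump.
    SAMPLE = """
    <domainCapabilities>
      <cpu>
        <mode name='custom' supported='yes'>
          <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
          <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
          <blockers model='Opteron_G4'>
            <feature name='fma4'/>
            <feature name='xop'/>
          </blockers>
        </mode>
      </cpu>
    </domainCapabilities>
    """

    def summarize_custom_models(xml_text):
        """Return (usable model names, {blocked model: missing feature names})."""
        root = ET.fromstring(xml_text)
        usable, blocked = set(), {}
        for mode in root.findall(".//cpu/mode[@name='custom']"):
            # Map each <blockers model='X'> element to its feature names.
            blockers = {
                b.get("model"): sorted(f.get("name") for f in b.findall("feature"))
                for b in mode.findall("blockers")
            }
            for model in mode.findall("model"):
                name = model.text
                if model.get("usable") == "yes":
                    usable.add(name)
                else:
                    blocked[name] = blockers.get(name, [])
        return usable, blocked

    if __name__ == "__main__":
        usable, blocked = summarize_custom_models(SAMPLE)
        print("usable:", sorted(usable))
        for name, features in blocked.items():
            print("blocked:", name, "-> missing", ", ".join(features))

[End of aside; the next log entry repeats the same capability query for machine_type=pc.]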
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.392 248207 DEBUG nova.virt.libvirt.host [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 26 11:52:16 compute-0 nova_compute[248203]: <domainCapabilities>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <path>/usr/libexec/qemu-kvm</path>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <domain>kvm</domain>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <arch>x86_64</arch>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <vcpu max='240'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <iothreads supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <os supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <enum name='firmware'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <loader supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='type'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>rom</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>pflash</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='readonly'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>yes</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>no</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='secure'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>no</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </loader>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   </os>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <cpu>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <mode name='host-passthrough' supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='hostPassthroughMigratable'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>on</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>off</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </mode>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <mode name='maximum' supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='maximumMigratable'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>on</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>off</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </mode>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <mode name='host-model' supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model fallback='forbid'>EPYC-Milan</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <vendor>AMD</vendor>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <maxphysaddr mode='passthrough' limit='48'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='x2apic'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='tsc-deadline'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='hypervisor'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='tsc_adjust'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='vaes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='vpclmulqdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='spec-ctrl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='stibp'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='ssbd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='cmp_legacy'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='overflow-recov'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='succor'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='virt-ssbd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='lbrv'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='tsc-scale'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='vmcb-clean'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='flushbyasid'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='pause-filter'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='pfthreshold'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='v-vmsave-vmload'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='vgif'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </mode>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <mode name='custom' supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Broadwell'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Broadwell-IBRS'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Broadwell-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Broadwell-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server-v4'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cascadelake-Server-v5'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cooperlake'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cooperlake-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Cooperlake-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Denverton'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mpx'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Denverton-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mpx'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='EPYC-Genoa'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amd-psfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='auto-ibrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='no-nested-data-bp'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='null-sel-clr-base'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='stibp-always-on'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='EPYC-Genoa-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amd-psfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='auto-ibrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='no-nested-data-bp'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='null-sel-clr-base'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='stibp-always-on'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='EPYC-Milan-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amd-psfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='no-nested-data-bp'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='null-sel-clr-base'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='stibp-always-on'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='GraniteRapids'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mcdt-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='pbrsb-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='prefetchiti'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='GraniteRapids-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mcdt-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='pbrsb-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='prefetchiti'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='GraniteRapids-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx10'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx10-128'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx10-256'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx10-512'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mcdt-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='pbrsb-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='prefetchiti'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Haswell'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Haswell-IBRS'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Haswell-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Haswell-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-noTSX'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v4'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v5'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v6'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Icelake-Server-v7'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='KnightsMill'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-4fmaps'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-4vnniw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512er'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512pf'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='KnightsMill-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-4fmaps'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-4vnniw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512er'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512pf'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Opteron_G4'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fma4'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xop'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Opteron_G4-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fma4'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xop'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Opteron_G5'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fma4'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tbm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xop'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Opteron_G5-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fma4'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tbm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xop'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='SapphireRapids'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='SapphireRapids-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='SapphireRapids-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='SapphireRapids-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='amx-tile'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-bf16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-fp16'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512-vpopcntdq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bitalg'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vbmi2'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrc'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fzrm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='la57'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='taa-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='tsx-ldtrk'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='xfd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='SierraForest'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-ne-convert'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cmpccxadd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mcdt-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='pbrsb-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='SierraForest-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-ifma'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-ne-convert'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx-vnni-int8'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='bus-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cmpccxadd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fbsdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='fsrs'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ibrs-all'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mcdt-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='pbrsb-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='psdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='sbdr-ssdp-no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='serialize'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Client'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Client-IBRS'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Client-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Client-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-IBRS'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='hle'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='rtm'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-v4'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Skylake-Server-v5'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512bw'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512cd'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512dq'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512f'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='avx512vl'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Snowridge'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='core-capability'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mpx'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='split-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Snowridge-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='core-capability'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='mpx'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='split-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Snowridge-v2'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='core-capability'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='split-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Snowridge-v3'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='core-capability'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='split-lock-detect'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='Snowridge-v4'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='cldemote'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='gfni'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdir64b'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='movdiri'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='athlon'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnow'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnowext'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='athlon-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnow'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnowext'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='core2duo'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='core2duo-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='coreduo'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='coreduo-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='n270'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='n270-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='ss'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='phenom'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnow'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnowext'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <blockers model='phenom-v1'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnow'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <feature name='3dnowext'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </blockers>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </mode>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   </cpu>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <memoryBacking supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <enum name='sourceType'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <value>file</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <value>anonymous</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <value>memfd</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   </memoryBacking>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <devices>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <disk supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='diskDevice'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>disk</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>cdrom</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>floppy</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>lun</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='bus'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>ide</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>fdc</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>scsi</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>usb</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>sata</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='model'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio-transitional</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio-non-transitional</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </disk>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <graphics supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='type'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>vnc</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>egl-headless</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>dbus</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </graphics>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <video supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='modelType'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>vga</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>cirrus</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>none</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>bochs</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>ramfb</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </video>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <hostdev supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='mode'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>subsystem</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='startupPolicy'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>default</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>mandatory</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>requisite</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>optional</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='subsysType'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>usb</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>pci</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>scsi</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='capsType'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='pciBackend'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </hostdev>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <rng supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='model'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio-transitional</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtio-non-transitional</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='backendModel'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>random</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>egd</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>builtin</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </rng>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <filesystem supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='driverType'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>path</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>handle</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>virtiofs</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </filesystem>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <tpm supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='model'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>tpm-tis</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>tpm-crb</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='backendModel'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>emulator</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>external</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='backendVersion'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>2.0</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </tpm>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <redirdev supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='bus'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>usb</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </redirdev>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <channel supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='type'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>pty</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>unix</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </channel>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <crypto supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='model'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='type'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>qemu</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='backendModel'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>builtin</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </crypto>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <interface supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='backendType'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>default</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>passt</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </interface>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <panic supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='model'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>isa</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>hyperv</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </panic>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <console supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='type'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>null</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>vc</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>pty</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>dev</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>file</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>pipe</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>stdio</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>udp</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>tcp</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>unix</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>qemu-vdagent</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>dbus</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </console>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   </devices>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   <features>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <gic supported='no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <vmcoreinfo supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <genid supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <backingStoreInput supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <backup supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <async-teardown supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <ps2 supported='yes'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <sev supported='no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <sgx supported='no'/>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <hyperv supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='features'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>relaxed</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>vapic</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>spinlocks</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>vpindex</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>runtime</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>synic</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>stimer</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>reset</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>vendor_id</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>frequencies</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>reenlightenment</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>tlbflush</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>ipi</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>avic</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>emsr_bitmap</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>xmm_input</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <defaults>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <spinlocks>4095</spinlocks>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <stimer_direct>on</stimer_direct>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <tlbflush_direct>on</tlbflush_direct>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <tlbflush_extended>on</tlbflush_extended>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </defaults>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </hyperv>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     <launchSecurity supported='yes'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       <enum name='sectype'>
Nov 26 11:52:16 compute-0 nova_compute[248203]:         <value>tdx</value>
Nov 26 11:52:16 compute-0 nova_compute[248203]:       </enum>
Nov 26 11:52:16 compute-0 nova_compute[248203]:     </launchSecurity>
Nov 26 11:52:16 compute-0 nova_compute[248203]:   </features>
Nov 26 11:52:16 compute-0 nova_compute[248203]: </domainCapabilities>
Nov 26 11:52:16 compute-0 nova_compute[248203]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.445 248207 DEBUG nova.virt.libvirt.host [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.445 248207 INFO nova.virt.libvirt.host [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Secure Boot support detected
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.446 248207 INFO nova.virt.libvirt.driver [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.446 248207 INFO nova.virt.libvirt.driver [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.453 248207 DEBUG nova.virt.libvirt.driver [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.479 248207 INFO nova.virt.node [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Determined node identity ffdf5b8d-24ca-43b0-a64a-b7345874e7b4 from /var/lib/nova/compute_id
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.496 248207 WARNING nova.compute.manager [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Compute nodes ['ffdf5b8d-24ca-43b0-a64a-b7345874e7b4'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.518 248207 INFO nova.compute.manager [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.555 248207 WARNING nova.compute.manager [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.555 248207 DEBUG oslo_concurrency.lockutils [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.555 248207 DEBUG oslo_concurrency.lockutils [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.555 248207 DEBUG oslo_concurrency.lockutils [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.556 248207 DEBUG nova.compute.resource_tracker [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.556 248207 DEBUG oslo_concurrency.processutils [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 11:52:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:52:16 compute-0 ceph-mon[74928]: pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 11:52:16 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1018343364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:52:16 compute-0 nova_compute[248203]: 2025-11-26 11:52:16.888 248207 DEBUG oslo_concurrency.processutils [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 11:52:16 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 26 11:52:16 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 26 11:52:17 compute-0 nova_compute[248203]: 2025-11-26 11:52:17.238 248207 WARNING nova.virt.libvirt.driver [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 11:52:17 compute-0 nova_compute[248203]: 2025-11-26 11:52:17.239 248207 DEBUG nova.compute.resource_tracker [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5213MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 11:52:17 compute-0 nova_compute[248203]: 2025-11-26 11:52:17.239 248207 DEBUG oslo_concurrency.lockutils [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:52:17 compute-0 nova_compute[248203]: 2025-11-26 11:52:17.240 248207 DEBUG oslo_concurrency.lockutils [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:52:17 compute-0 nova_compute[248203]: 2025-11-26 11:52:17.256 248207 WARNING nova.compute.resource_tracker [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] No compute node record for compute-0.ctlplane.example.com:ffdf5b8d-24ca-43b0-a64a-b7345874e7b4: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host ffdf5b8d-24ca-43b0-a64a-b7345874e7b4 could not be found.
Nov 26 11:52:17 compute-0 nova_compute[248203]: 2025-11-26 11:52:17.269 248207 INFO nova.compute.resource_tracker [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: ffdf5b8d-24ca-43b0-a64a-b7345874e7b4
Nov 26 11:52:17 compute-0 nova_compute[248203]: 2025-11-26 11:52:17.312 248207 DEBUG nova.compute.resource_tracker [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 11:52:17 compute-0 nova_compute[248203]: 2025-11-26 11:52:17.312 248207 DEBUG nova.compute.resource_tracker [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 11:52:17 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:17 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1018343364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:52:18 compute-0 nova_compute[248203]: 2025-11-26 11:52:18.210 248207 INFO nova.scheduler.client.report [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] [req-9aca2cd6-5d53-4d60-843e-fbd8beac7365] Created resource provider record via placement API for resource provider with UUID ffdf5b8d-24ca-43b0-a64a-b7345874e7b4 and name compute-0.ctlplane.example.com.
Nov 26 11:52:18 compute-0 nova_compute[248203]: 2025-11-26 11:52:18.627 248207 DEBUG oslo_concurrency.processutils [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 11:52:18 compute-0 ceph-mon[74928]: pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 11:52:18 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/201815580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:52:18 compute-0 nova_compute[248203]: 2025-11-26 11:52:18.960 248207 DEBUG oslo_concurrency.processutils [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 11:52:18 compute-0 nova_compute[248203]: 2025-11-26 11:52:18.964 248207 DEBUG nova.virt.libvirt.host [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 26 11:52:18 compute-0 nova_compute[248203]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 26 11:52:18 compute-0 nova_compute[248203]: 2025-11-26 11:52:18.964 248207 INFO nova.virt.libvirt.host [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] kernel doesn't support AMD SEV
Nov 26 11:52:18 compute-0 nova_compute[248203]: 2025-11-26 11:52:18.965 248207 DEBUG nova.compute.provider_tree [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Updating inventory in ProviderTree for provider ffdf5b8d-24ca-43b0-a64a-b7345874e7b4 with inventory: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 26 11:52:18 compute-0 nova_compute[248203]: 2025-11-26 11:52:18.965 248207 DEBUG nova.virt.libvirt.driver [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 26 11:52:19 compute-0 nova_compute[248203]: 2025-11-26 11:52:19.017 248207 DEBUG nova.scheduler.client.report [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Updated inventory for provider ffdf5b8d-24ca-43b0-a64a-b7345874e7b4 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 26 11:52:19 compute-0 nova_compute[248203]: 2025-11-26 11:52:19.017 248207 DEBUG nova.compute.provider_tree [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Updating resource provider ffdf5b8d-24ca-43b0-a64a-b7345874e7b4 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 26 11:52:19 compute-0 nova_compute[248203]: 2025-11-26 11:52:19.018 248207 DEBUG nova.compute.provider_tree [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Updating inventory in ProviderTree for provider ffdf5b8d-24ca-43b0-a64a-b7345874e7b4 with inventory: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 26 11:52:19 compute-0 nova_compute[248203]: 2025-11-26 11:52:19.102 248207 DEBUG nova.compute.provider_tree [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Updating resource provider ffdf5b8d-24ca-43b0-a64a-b7345874e7b4 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 26 11:52:19 compute-0 nova_compute[248203]: 2025-11-26 11:52:19.119 248207 DEBUG nova.compute.resource_tracker [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 11:52:19 compute-0 nova_compute[248203]: 2025-11-26 11:52:19.119 248207 DEBUG oslo_concurrency.lockutils [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:52:19 compute-0 nova_compute[248203]: 2025-11-26 11:52:19.119 248207 DEBUG nova.service [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 26 11:52:19 compute-0 nova_compute[248203]: 2025-11-26 11:52:19.166 248207 DEBUG nova.service [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 26 11:52:19 compute-0 nova_compute[248203]: 2025-11-26 11:52:19.166 248207 DEBUG nova.servicegroup.drivers.db [None req-7b83ff51-4035-4e48-ae41-937f1113c911 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 26 11:52:19 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:19 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/201815580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:52:20 compute-0 ceph-mon[74928]: pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:21 compute-0 nova_compute[248203]: 2025-11-26 11:52:21.168 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:52:21 compute-0 nova_compute[248203]: 2025-11-26 11:52:21.183 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:52:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:52:21 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:22 compute-0 ceph-mon[74928]: pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:23 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:24 compute-0 ceph-mon[74928]: pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:25 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:52:26 compute-0 ceph-mon[74928]: pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:27 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:28 compute-0 ceph-mon[74928]: pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:29 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:30 compute-0 ceph-mon[74928]: pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:52:31 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:32 compute-0 ceph-mon[74928]: pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:33 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:33 compute-0 podman[248664]: 2025-11-26 11:52:33.615405332 +0000 UTC m=+0.040643120 container health_status b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Nov 26 11:52:34 compute-0 ceph-mon[74928]: pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:35 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 11:52:35 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1525024419' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 11:52:35 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 11:52:35 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1525024419' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 11:52:35 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 11:52:35 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/690004198' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 11:52:35 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 11:52:35 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/690004198' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 11:52:35 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:35 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 11:52:35 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1558419395' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 11:52:35 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 11:52:35 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1558419395' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 11:52:35 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/1525024419' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 11:52:35 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/1525024419' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 11:52:35 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/690004198' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 11:52:35 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/690004198' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 11:52:35 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/1558419395' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 11:52:35 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/1558419395' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 11:52:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:52:36 compute-0 podman[248681]: 2025-11-26 11:52:36.607133255 +0000 UTC m=+0.033243997 container health_status 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 26 11:52:36 compute-0 ceph-mon[74928]: pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:52:36.809002) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157956809032, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2049, "num_deletes": 251, "total_data_size": 3490365, "memory_usage": 3542728, "flush_reason": "Manual Compaction"}
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157956815458, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3404500, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9675, "largest_seqno": 11723, "table_properties": {"data_size": 3395169, "index_size": 5889, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18049, "raw_average_key_size": 19, "raw_value_size": 3376694, "raw_average_value_size": 3650, "num_data_blocks": 267, "num_entries": 925, "num_filter_entries": 925, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764157729, "oldest_key_time": 1764157729, "file_creation_time": 1764157956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "363c2a1d-8d28-40b7-a8ff-7233f1c9b7d5", "db_session_id": "CJT49RLFB1C6KNYXG0ER", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 6477 microseconds, and 4726 cpu microseconds.
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:52:36.815480) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3404500 bytes OK
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:52:36.815490) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:52:36.815811) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:52:36.815820) EVENT_LOG_v1 {"time_micros": 1764157956815817, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:52:36.815828) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3481795, prev total WAL file size 3481795, number of live WAL files 2.
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:52:36.816416) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3324KB)], [26(6070KB)]
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157956816439, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9620250, "oldest_snapshot_seqno": -1}
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3693 keys, 8032241 bytes, temperature: kUnknown
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157956831795, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8032241, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8003679, "index_size": 18212, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88703, "raw_average_key_size": 24, "raw_value_size": 7933191, "raw_average_value_size": 2148, "num_data_blocks": 790, "num_entries": 3693, "num_filter_entries": 3693, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764157079, "oldest_key_time": 0, "file_creation_time": 1764157956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "363c2a1d-8d28-40b7-a8ff-7233f1c9b7d5", "db_session_id": "CJT49RLFB1C6KNYXG0ER", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:52:36.831908) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8032241 bytes
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:52:36.832211) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 625.0 rd, 521.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 5.9 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(5.2) write-amplify(2.4) OK, records in: 4207, records dropped: 514 output_compression: NoCompression
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:52:36.832225) EVENT_LOG_v1 {"time_micros": 1764157956832219, "job": 10, "event": "compaction_finished", "compaction_time_micros": 15393, "compaction_time_cpu_micros": 12770, "output_level": 6, "num_output_files": 1, "total_output_size": 8032241, "num_input_records": 4207, "num_output_records": 3693, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157956832609, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764157956833276, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:52:36.816367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:52:36.833293) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:52:36.833296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:52:36.833297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:52:36.833298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:52:36 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:52:36.833299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:52:37 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:38 compute-0 ceph-mon[74928]: pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:39 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:40 compute-0 ceph-mon[74928]: pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Optimize plan auto_2025-11-26_11:52:41
Nov 26 11:52:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 11:52:41 compute-0 ceph-mgr[75197]: [balancer INFO root] do_upmap
Nov 26 11:52:41 compute-0 ceph-mgr[75197]: [balancer INFO root] pools ['default.rgw.control', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', '.mgr', 'vms']
Nov 26 11:52:41 compute-0 ceph-mgr[75197]: [balancer INFO root] prepared 0/10 changes
Nov 26 11:52:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:52:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:52:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:52:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:52:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:52:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:52:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 11:52:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:52:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 11:52:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:52:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:52:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:52:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:52:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:52:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:52:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:52:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:52:41 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:42 compute-0 ceph-mon[74928]: pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:43 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:44 compute-0 ceph-mon[74928]: pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:45 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:52:46 compute-0 podman[248697]: 2025-11-26 11:52:46.631281989 +0000 UTC m=+0.056976866 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 26 11:52:46 compute-0 ceph-mon[74928]: pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:47 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:48 compute-0 ceph-mon[74928]: pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:49 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 11:52:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:52:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 11:52:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:52:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:52:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:52:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:52:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:52:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:52:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:52:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:52:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:52:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 11:52:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:52:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:52:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:52:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 11:52:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:52:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 11:52:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:52:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:52:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:52:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 11:52:50 compute-0 ceph-mon[74928]: pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:52:51 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:52 compute-0 ceph-mon[74928]: pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:53 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:54 compute-0 ceph-mon[74928]: pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:55 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:52:56 compute-0 ceph-mon[74928]: pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:57 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 26 11:52:57 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/658854473' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 26 11:52:57 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14331 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 26 11:52:57 compute-0 ceph-mgr[75197]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 26 11:52:57 compute-0 ceph-mgr[75197]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 26 11:52:58 compute-0 ceph-mon[74928]: pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:58 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/658854473' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 26 11:52:59 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:52:59 compute-0 ceph-mon[74928]: from='client.14331 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 26 11:53:00 compute-0 ceph-mon[74928]: pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:53:01 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:02 compute-0 ceph-mon[74928]: pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:53:02.986 159928 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:53:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:53:02.986 159928 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:53:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:53:02.986 159928 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:53:03 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:04 compute-0 podman[248720]: 2025-11-26 11:53:04.618271191 +0000 UTC m=+0.043097278 container health_status b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 11:53:04 compute-0 ceph-mon[74928]: pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:05 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:53:06 compute-0 ceph-mon[74928]: pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:07 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:07 compute-0 podman[248737]: 2025-11-26 11:53:07.639154049 +0000 UTC m=+0.065188771 container health_status 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 26 11:53:08 compute-0 ceph-mon[74928]: pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:09 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:10 compute-0 ceph-mon[74928]: pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:53:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:53:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:53:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:53:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:53:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:53:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:53:11 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:12 compute-0 ceph-mon[74928]: pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:13 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 26 11:53:13 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3305115390' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 26 11:53:13 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14349 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 26 11:53:13 compute-0 ceph-mgr[75197]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 26 11:53:13 compute-0 ceph-mgr[75197]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 26 11:53:13 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/3305115390' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 26 11:53:14 compute-0 sudo[248754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:53:14 compute-0 sudo[248754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:53:14 compute-0 sudo[248754]: pam_unix(sudo:session): session closed for user root
Nov 26 11:53:14 compute-0 sudo[248779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:53:14 compute-0 sudo[248779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:53:14 compute-0 sudo[248779]: pam_unix(sudo:session): session closed for user root
Nov 26 11:53:14 compute-0 sudo[248804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:53:14 compute-0 sudo[248804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:53:14 compute-0 sudo[248804]: pam_unix(sudo:session): session closed for user root
Nov 26 11:53:14 compute-0 sudo[248829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 11:53:14 compute-0 sudo[248829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:53:14 compute-0 ceph-mon[74928]: pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:14 compute-0 ceph-mon[74928]: from='client.14349 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 26 11:53:15 compute-0 sudo[248829]: pam_unix(sudo:session): session closed for user root
Nov 26 11:53:15 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:53:15 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:53:15 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:53:15 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:53:15 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:53:15 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:53:15 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 3c757096-2d04-4ee1-9862-f17eb6c30066 does not exist
Nov 26 11:53:15 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev c2df48e7-75d8-4933-bc6b-270d72edb564 does not exist
Nov 26 11:53:15 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 97d0282b-1822-48e1-9d25-4e9deb974291 does not exist
Nov 26 11:53:15 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 11:53:15 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:53:15 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 11:53:15 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:53:15 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:53:15 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:53:15 compute-0 sudo[248883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:53:15 compute-0 sudo[248883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:53:15 compute-0 sudo[248883]: pam_unix(sudo:session): session closed for user root
Nov 26 11:53:15 compute-0 sudo[248908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:53:15 compute-0 sudo[248908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:53:15 compute-0 sudo[248908]: pam_unix(sudo:session): session closed for user root
Nov 26 11:53:15 compute-0 sudo[248933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:53:15 compute-0 sudo[248933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:53:15 compute-0 sudo[248933]: pam_unix(sudo:session): session closed for user root
Nov 26 11:53:15 compute-0 sudo[248958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 11:53:15 compute-0 sudo[248958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:53:15 compute-0 podman[249015]: 2025-11-26 11:53:15.562741298 +0000 UTC m=+0.024883731 container create e70464f44c87eb5e5958a6f7f7fddc5ed8d70beb86673692ebe79cac197a15d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_roentgen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:53:15 compute-0 systemd[1]: Started libpod-conmon-e70464f44c87eb5e5958a6f7f7fddc5ed8d70beb86673692ebe79cac197a15d0.scope.
Nov 26 11:53:15 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:15 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:53:15 compute-0 podman[249015]: 2025-11-26 11:53:15.621763338 +0000 UTC m=+0.083905761 container init e70464f44c87eb5e5958a6f7f7fddc5ed8d70beb86673692ebe79cac197a15d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_roentgen, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 11:53:15 compute-0 podman[249015]: 2025-11-26 11:53:15.626229014 +0000 UTC m=+0.088371437 container start e70464f44c87eb5e5958a6f7f7fddc5ed8d70beb86673692ebe79cac197a15d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_roentgen, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 11:53:15 compute-0 nova_compute[248203]: 2025-11-26 11:53:15.626 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:53:15 compute-0 nova_compute[248203]: 2025-11-26 11:53:15.627 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:53:15 compute-0 nova_compute[248203]: 2025-11-26 11:53:15.627 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 11:53:15 compute-0 nova_compute[248203]: 2025-11-26 11:53:15.627 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 11:53:15 compute-0 podman[249015]: 2025-11-26 11:53:15.62854286 +0000 UTC m=+0.090685283 container attach e70464f44c87eb5e5958a6f7f7fddc5ed8d70beb86673692ebe79cac197a15d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_roentgen, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 11:53:15 compute-0 fervent_roentgen[249028]: 167 167
Nov 26 11:53:15 compute-0 systemd[1]: libpod-e70464f44c87eb5e5958a6f7f7fddc5ed8d70beb86673692ebe79cac197a15d0.scope: Deactivated successfully.
Nov 26 11:53:15 compute-0 podman[249015]: 2025-11-26 11:53:15.630278487 +0000 UTC m=+0.092420949 container died e70464f44c87eb5e5958a6f7f7fddc5ed8d70beb86673692ebe79cac197a15d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_roentgen, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:53:15 compute-0 nova_compute[248203]: 2025-11-26 11:53:15.640 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 11:53:15 compute-0 nova_compute[248203]: 2025-11-26 11:53:15.640 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:53:15 compute-0 nova_compute[248203]: 2025-11-26 11:53:15.640 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:53:15 compute-0 nova_compute[248203]: 2025-11-26 11:53:15.640 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:53:15 compute-0 nova_compute[248203]: 2025-11-26 11:53:15.641 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:53:15 compute-0 nova_compute[248203]: 2025-11-26 11:53:15.641 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:53:15 compute-0 nova_compute[248203]: 2025-11-26 11:53:15.641 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:53:15 compute-0 nova_compute[248203]: 2025-11-26 11:53:15.641 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 11:53:15 compute-0 nova_compute[248203]: 2025-11-26 11:53:15.641 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:53:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-134901a40a34a0495c5e04dddd2d3679d1134259ba38bb23e6169be5f001f16d-merged.mount: Deactivated successfully.
Nov 26 11:53:15 compute-0 podman[249015]: 2025-11-26 11:53:15.552308349 +0000 UTC m=+0.014450793 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:53:15 compute-0 podman[249015]: 2025-11-26 11:53:15.650365447 +0000 UTC m=+0.112507870 container remove e70464f44c87eb5e5958a6f7f7fddc5ed8d70beb86673692ebe79cac197a15d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_roentgen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:53:15 compute-0 nova_compute[248203]: 2025-11-26 11:53:15.658 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:53:15 compute-0 nova_compute[248203]: 2025-11-26 11:53:15.659 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:53:15 compute-0 nova_compute[248203]: 2025-11-26 11:53:15.659 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:53:15 compute-0 nova_compute[248203]: 2025-11-26 11:53:15.659 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 11:53:15 compute-0 nova_compute[248203]: 2025-11-26 11:53:15.660 248207 DEBUG oslo_concurrency.processutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 11:53:15 compute-0 systemd[1]: libpod-conmon-e70464f44c87eb5e5958a6f7f7fddc5ed8d70beb86673692ebe79cac197a15d0.scope: Deactivated successfully.
Nov 26 11:53:15 compute-0 podman[249050]: 2025-11-26 11:53:15.768143267 +0000 UTC m=+0.027729247 container create af32c68b5f8c53eefe6c463e61b000742488cc99195ff556db19fcd69a78d02d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bohr, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 11:53:15 compute-0 systemd[1]: Started libpod-conmon-af32c68b5f8c53eefe6c463e61b000742488cc99195ff556db19fcd69a78d02d.scope.
Nov 26 11:53:15 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:53:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb648bb80d76cfc6c2610b6666ca770d9edad0c56d3b9c717197e18dd9974e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:53:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb648bb80d76cfc6c2610b6666ca770d9edad0c56d3b9c717197e18dd9974e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:53:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb648bb80d76cfc6c2610b6666ca770d9edad0c56d3b9c717197e18dd9974e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:53:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb648bb80d76cfc6c2610b6666ca770d9edad0c56d3b9c717197e18dd9974e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:53:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb648bb80d76cfc6c2610b6666ca770d9edad0c56d3b9c717197e18dd9974e4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:53:15 compute-0 podman[249050]: 2025-11-26 11:53:15.822796805 +0000 UTC m=+0.082382795 container init af32c68b5f8c53eefe6c463e61b000742488cc99195ff556db19fcd69a78d02d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 11:53:15 compute-0 podman[249050]: 2025-11-26 11:53:15.828470195 +0000 UTC m=+0.088056164 container start af32c68b5f8c53eefe6c463e61b000742488cc99195ff556db19fcd69a78d02d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 11:53:15 compute-0 podman[249050]: 2025-11-26 11:53:15.829668139 +0000 UTC m=+0.089254129 container attach af32c68b5f8c53eefe6c463e61b000742488cc99195ff556db19fcd69a78d02d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bohr, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 11:53:15 compute-0 podman[249050]: 2025-11-26 11:53:15.757896579 +0000 UTC m=+0.017482570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:53:15 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:53:15 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:53:15 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:53:15 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:53:15 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:53:15 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:53:15 compute-0 ceph-mon[74928]: pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:15 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 11:53:15 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3956336318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:53:16 compute-0 nova_compute[248203]: 2025-11-26 11:53:16.000 248207 DEBUG oslo_concurrency.processutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 11:53:16 compute-0 nova_compute[248203]: 2025-11-26 11:53:16.254 248207 WARNING nova.virt.libvirt.driver [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 11:53:16 compute-0 nova_compute[248203]: 2025-11-26 11:53:16.255 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5202MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 11:53:16 compute-0 nova_compute[248203]: 2025-11-26 11:53:16.255 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:53:16 compute-0 nova_compute[248203]: 2025-11-26 11:53:16.255 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:53:16 compute-0 nova_compute[248203]: 2025-11-26 11:53:16.309 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 11:53:16 compute-0 nova_compute[248203]: 2025-11-26 11:53:16.310 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 11:53:16 compute-0 nova_compute[248203]: 2025-11-26 11:53:16.321 248207 DEBUG oslo_concurrency.processutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 11:53:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:53:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 11:53:16 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3692038309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:53:16 compute-0 nova_compute[248203]: 2025-11-26 11:53:16.650 248207 DEBUG oslo_concurrency.processutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 11:53:16 compute-0 nova_compute[248203]: 2025-11-26 11:53:16.655 248207 DEBUG nova.compute.provider_tree [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Inventory has not changed in ProviderTree for provider: ffdf5b8d-24ca-43b0-a64a-b7345874e7b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 11:53:16 compute-0 nova_compute[248203]: 2025-11-26 11:53:16.671 248207 DEBUG nova.scheduler.client.report [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Inventory has not changed for provider ffdf5b8d-24ca-43b0-a64a-b7345874e7b4 based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 11:53:16 compute-0 nova_compute[248203]: 2025-11-26 11:53:16.672 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 11:53:16 compute-0 nova_compute[248203]: 2025-11-26 11:53:16.672 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.417s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:53:16 compute-0 affectionate_bohr[249083]: --> passed data devices: 0 physical, 3 LVM
Nov 26 11:53:16 compute-0 affectionate_bohr[249083]: --> relative data size: 1.0
Nov 26 11:53:16 compute-0 affectionate_bohr[249083]: --> All data devices are unavailable
Nov 26 11:53:16 compute-0 systemd[1]: libpod-af32c68b5f8c53eefe6c463e61b000742488cc99195ff556db19fcd69a78d02d.scope: Deactivated successfully.
Nov 26 11:53:16 compute-0 podman[249050]: 2025-11-26 11:53:16.705109331 +0000 UTC m=+0.964695301 container died af32c68b5f8c53eefe6c463e61b000742488cc99195ff556db19fcd69a78d02d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bohr, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:53:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-eeb648bb80d76cfc6c2610b6666ca770d9edad0c56d3b9c717197e18dd9974e4-merged.mount: Deactivated successfully.
Nov 26 11:53:16 compute-0 podman[249050]: 2025-11-26 11:53:16.749231645 +0000 UTC m=+1.008817616 container remove af32c68b5f8c53eefe6c463e61b000742488cc99195ff556db19fcd69a78d02d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bohr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:53:16 compute-0 systemd[1]: libpod-conmon-af32c68b5f8c53eefe6c463e61b000742488cc99195ff556db19fcd69a78d02d.scope: Deactivated successfully.
Nov 26 11:53:16 compute-0 sudo[248958]: pam_unix(sudo:session): session closed for user root
Nov 26 11:53:16 compute-0 podman[249136]: 2025-11-26 11:53:16.806224305 +0000 UTC m=+0.074540624 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 11:53:16 compute-0 sudo[249162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:53:16 compute-0 sudo[249162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:53:16 compute-0 sudo[249162]: pam_unix(sudo:session): session closed for user root
Nov 26 11:53:16 compute-0 sudo[249193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:53:16 compute-0 sudo[249193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:53:16 compute-0 sudo[249193]: pam_unix(sudo:session): session closed for user root
Nov 26 11:53:16 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3956336318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:53:16 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3692038309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:53:16 compute-0 sudo[249218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:53:16 compute-0 sudo[249218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:53:16 compute-0 sudo[249218]: pam_unix(sudo:session): session closed for user root
Nov 26 11:53:16 compute-0 sudo[249243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- lvm list --format json
Nov 26 11:53:16 compute-0 sudo[249243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:53:17 compute-0 podman[249299]: 2025-11-26 11:53:17.181970294 +0000 UTC m=+0.028710284 container create 8fa6649b29f9d82b7b62c2c5a93ab2deda66e50b6ccbea1c9e07e9eb505bdef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:53:17 compute-0 systemd[1]: Started libpod-conmon-8fa6649b29f9d82b7b62c2c5a93ab2deda66e50b6ccbea1c9e07e9eb505bdef9.scope.
Nov 26 11:53:17 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:53:17 compute-0 podman[249299]: 2025-11-26 11:53:17.236849566 +0000 UTC m=+0.083589556 container init 8fa6649b29f9d82b7b62c2c5a93ab2deda66e50b6ccbea1c9e07e9eb505bdef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Nov 26 11:53:17 compute-0 podman[249299]: 2025-11-26 11:53:17.241237456 +0000 UTC m=+0.087977446 container start 8fa6649b29f9d82b7b62c2c5a93ab2deda66e50b6ccbea1c9e07e9eb505bdef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 11:53:17 compute-0 podman[249299]: 2025-11-26 11:53:17.242380306 +0000 UTC m=+0.089120297 container attach 8fa6649b29f9d82b7b62c2c5a93ab2deda66e50b6ccbea1c9e07e9eb505bdef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_noyce, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:53:17 compute-0 bold_noyce[249312]: 167 167
Nov 26 11:53:17 compute-0 systemd[1]: libpod-8fa6649b29f9d82b7b62c2c5a93ab2deda66e50b6ccbea1c9e07e9eb505bdef9.scope: Deactivated successfully.
Nov 26 11:53:17 compute-0 podman[249299]: 2025-11-26 11:53:17.171161658 +0000 UTC m=+0.017901658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:53:17 compute-0 podman[249317]: 2025-11-26 11:53:17.273731 +0000 UTC m=+0.015493296 container died 8fa6649b29f9d82b7b62c2c5a93ab2deda66e50b6ccbea1c9e07e9eb505bdef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_noyce, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:53:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c888f34f22e95ba155a90600cabf031947385287338cdfdef4a941f3041b886-merged.mount: Deactivated successfully.
Nov 26 11:53:17 compute-0 podman[249317]: 2025-11-26 11:53:17.292043711 +0000 UTC m=+0.033805987 container remove 8fa6649b29f9d82b7b62c2c5a93ab2deda66e50b6ccbea1c9e07e9eb505bdef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 26 11:53:17 compute-0 systemd[1]: libpod-conmon-8fa6649b29f9d82b7b62c2c5a93ab2deda66e50b6ccbea1c9e07e9eb505bdef9.scope: Deactivated successfully.
Nov 26 11:53:17 compute-0 podman[249335]: 2025-11-26 11:53:17.411228439 +0000 UTC m=+0.027103760 container create 66a7a52cc882d4e69b9b5cebcd47712d4a4e578f43e039146ce466febd3c5d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:53:17 compute-0 systemd[1]: Started libpod-conmon-66a7a52cc882d4e69b9b5cebcd47712d4a4e578f43e039146ce466febd3c5d22.scope.
Nov 26 11:53:17 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:53:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6512f5740c9365012196dd7e7df78a35faa881b6ee3016cab55362b31a252918/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:53:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6512f5740c9365012196dd7e7df78a35faa881b6ee3016cab55362b31a252918/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:53:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6512f5740c9365012196dd7e7df78a35faa881b6ee3016cab55362b31a252918/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:53:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6512f5740c9365012196dd7e7df78a35faa881b6ee3016cab55362b31a252918/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:53:17 compute-0 podman[249335]: 2025-11-26 11:53:17.474803189 +0000 UTC m=+0.090678520 container init 66a7a52cc882d4e69b9b5cebcd47712d4a4e578f43e039146ce466febd3c5d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:53:17 compute-0 podman[249335]: 2025-11-26 11:53:17.47896887 +0000 UTC m=+0.094844181 container start 66a7a52cc882d4e69b9b5cebcd47712d4a4e578f43e039146ce466febd3c5d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 11:53:17 compute-0 podman[249335]: 2025-11-26 11:53:17.480510822 +0000 UTC m=+0.096386153 container attach 66a7a52cc882d4e69b9b5cebcd47712d4a4e578f43e039146ce466febd3c5d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_roentgen, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 11:53:17 compute-0 podman[249335]: 2025-11-26 11:53:17.400319244 +0000 UTC m=+0.016194575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:53:17 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:17 compute-0 ceph-mon[74928]: pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:18 compute-0 strange_roentgen[249348]: {
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:     "0": [
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:         {
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "devices": [
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "/dev/loop3"
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             ],
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "lv_name": "ceph_lv0",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "lv_size": "21470642176",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a9ad59a0-aa2e-4d92-b571-519d2d145b6a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "lv_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "name": "ceph_lv0",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "tags": {
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.block_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.cluster_name": "ceph",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.crush_device_class": "",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.encrypted": "0",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.osd_fsid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.osd_id": "0",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.type": "block",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.vdo": "0"
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             },
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "type": "block",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "vg_name": "ceph_vg0"
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:         }
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:     ],
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:     "1": [
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:         {
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "devices": [
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "/dev/loop4"
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             ],
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "lv_name": "ceph_lv1",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "lv_size": "21470642176",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2627095b-eef8-4027-bfef-68bf7cb6801f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "lv_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "name": "ceph_lv1",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "tags": {
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.block_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.cluster_name": "ceph",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.crush_device_class": "",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.encrypted": "0",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.osd_fsid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.osd_id": "1",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.type": "block",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.vdo": "0"
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             },
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "type": "block",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "vg_name": "ceph_vg1"
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:         }
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:     ],
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:     "2": [
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:         {
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "devices": [
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "/dev/loop5"
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             ],
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "lv_name": "ceph_lv2",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "lv_size": "21470642176",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d56156fb-7361-4bef-b06b-1320109b4323,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "lv_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "name": "ceph_lv2",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "tags": {
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.block_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.cluster_name": "ceph",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.crush_device_class": "",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.encrypted": "0",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.osd_fsid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.osd_id": "2",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.type": "block",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:                 "ceph.vdo": "0"
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             },
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "type": "block",
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:             "vg_name": "ceph_vg2"
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:         }
Nov 26 11:53:18 compute-0 strange_roentgen[249348]:     ]
Nov 26 11:53:18 compute-0 strange_roentgen[249348]: }
Nov 26 11:53:18 compute-0 systemd[1]: libpod-66a7a52cc882d4e69b9b5cebcd47712d4a4e578f43e039146ce466febd3c5d22.scope: Deactivated successfully.
Nov 26 11:53:18 compute-0 podman[249335]: 2025-11-26 11:53:18.112853146 +0000 UTC m=+0.728728477 container died 66a7a52cc882d4e69b9b5cebcd47712d4a4e578f43e039146ce466febd3c5d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_roentgen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 11:53:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-6512f5740c9365012196dd7e7df78a35faa881b6ee3016cab55362b31a252918-merged.mount: Deactivated successfully.
Nov 26 11:53:18 compute-0 podman[249335]: 2025-11-26 11:53:18.147625551 +0000 UTC m=+0.763500861 container remove 66a7a52cc882d4e69b9b5cebcd47712d4a4e578f43e039146ce466febd3c5d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 11:53:18 compute-0 systemd[1]: libpod-conmon-66a7a52cc882d4e69b9b5cebcd47712d4a4e578f43e039146ce466febd3c5d22.scope: Deactivated successfully.
Nov 26 11:53:18 compute-0 sudo[249243]: pam_unix(sudo:session): session closed for user root
Nov 26 11:53:18 compute-0 sudo[249367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:53:18 compute-0 sudo[249367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:53:18 compute-0 sudo[249367]: pam_unix(sudo:session): session closed for user root
Nov 26 11:53:18 compute-0 sudo[249392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:53:18 compute-0 sudo[249392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:53:18 compute-0 sudo[249392]: pam_unix(sudo:session): session closed for user root
Nov 26 11:53:18 compute-0 sudo[249417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:53:18 compute-0 sudo[249417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:53:18 compute-0 sudo[249417]: pam_unix(sudo:session): session closed for user root
Nov 26 11:53:18 compute-0 sudo[249442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- raw list --format json
Nov 26 11:53:18 compute-0 sudo[249442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:53:18 compute-0 podman[249498]: 2025-11-26 11:53:18.568840638 +0000 UTC m=+0.028435407 container create 808ba8679c1c2c60de7f71d4b0e3c6333216ef243944970b1dea5245dc04dd15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_easley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 11:53:18 compute-0 systemd[1]: Started libpod-conmon-808ba8679c1c2c60de7f71d4b0e3c6333216ef243944970b1dea5245dc04dd15.scope.
Nov 26 11:53:18 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:53:18 compute-0 podman[249498]: 2025-11-26 11:53:18.610750527 +0000 UTC m=+0.070345305 container init 808ba8679c1c2c60de7f71d4b0e3c6333216ef243944970b1dea5245dc04dd15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:53:18 compute-0 podman[249498]: 2025-11-26 11:53:18.615607108 +0000 UTC m=+0.075201867 container start 808ba8679c1c2c60de7f71d4b0e3c6333216ef243944970b1dea5245dc04dd15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Nov 26 11:53:18 compute-0 podman[249498]: 2025-11-26 11:53:18.616758335 +0000 UTC m=+0.076353095 container attach 808ba8679c1c2c60de7f71d4b0e3c6333216ef243944970b1dea5245dc04dd15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_easley, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:53:18 compute-0 angry_easley[249511]: 167 167
Nov 26 11:53:18 compute-0 systemd[1]: libpod-808ba8679c1c2c60de7f71d4b0e3c6333216ef243944970b1dea5245dc04dd15.scope: Deactivated successfully.
Nov 26 11:53:18 compute-0 conmon[249511]: conmon 808ba8679c1c2c60de7f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-808ba8679c1c2c60de7f71d4b0e3c6333216ef243944970b1dea5245dc04dd15.scope/container/memory.events
Nov 26 11:53:18 compute-0 podman[249498]: 2025-11-26 11:53:18.618729716 +0000 UTC m=+0.078324475 container died 808ba8679c1c2c60de7f71d4b0e3c6333216ef243944970b1dea5245dc04dd15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_easley, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:53:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-c553137c78ebf738bd0c8c8fb86a08ec178b7e7f39d939d59647cce5bc2acec0-merged.mount: Deactivated successfully.
Nov 26 11:53:18 compute-0 podman[249498]: 2025-11-26 11:53:18.636192768 +0000 UTC m=+0.095787527 container remove 808ba8679c1c2c60de7f71d4b0e3c6333216ef243944970b1dea5245dc04dd15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_easley, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 11:53:18 compute-0 podman[249498]: 2025-11-26 11:53:18.557043101 +0000 UTC m=+0.016637870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:53:18 compute-0 systemd[1]: libpod-conmon-808ba8679c1c2c60de7f71d4b0e3c6333216ef243944970b1dea5245dc04dd15.scope: Deactivated successfully.
Nov 26 11:53:18 compute-0 podman[249532]: 2025-11-26 11:53:18.753607774 +0000 UTC m=+0.026994664 container create 9efb1d6c1348489118a08e9b9bee09a6811e7028a90fa34d484eff091d229e7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:53:18 compute-0 systemd[1]: Started libpod-conmon-9efb1d6c1348489118a08e9b9bee09a6811e7028a90fa34d484eff091d229e7f.scope.
Nov 26 11:53:18 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:53:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c257c91684b6117a5a7f5d6686bd35cd7120b3cc3badf63cea63f2106c043cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:53:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c257c91684b6117a5a7f5d6686bd35cd7120b3cc3badf63cea63f2106c043cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:53:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c257c91684b6117a5a7f5d6686bd35cd7120b3cc3badf63cea63f2106c043cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:53:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c257c91684b6117a5a7f5d6686bd35cd7120b3cc3badf63cea63f2106c043cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:53:18 compute-0 podman[249532]: 2025-11-26 11:53:18.826013228 +0000 UTC m=+0.099400128 container init 9efb1d6c1348489118a08e9b9bee09a6811e7028a90fa34d484eff091d229e7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_williamson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:53:18 compute-0 podman[249532]: 2025-11-26 11:53:18.830942657 +0000 UTC m=+0.104329537 container start 9efb1d6c1348489118a08e9b9bee09a6811e7028a90fa34d484eff091d229e7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_williamson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 11:53:18 compute-0 podman[249532]: 2025-11-26 11:53:18.832049691 +0000 UTC m=+0.105436571 container attach 9efb1d6c1348489118a08e9b9bee09a6811e7028a90fa34d484eff091d229e7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 11:53:18 compute-0 podman[249532]: 2025-11-26 11:53:18.742747432 +0000 UTC m=+0.016134332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:53:19 compute-0 sweet_williamson[249545]: {
Nov 26 11:53:19 compute-0 sweet_williamson[249545]:     "2627095b-eef8-4027-bfef-68bf7cb6801f": {
Nov 26 11:53:19 compute-0 sweet_williamson[249545]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:53:19 compute-0 sweet_williamson[249545]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 11:53:19 compute-0 sweet_williamson[249545]:         "osd_id": 1,
Nov 26 11:53:19 compute-0 sweet_williamson[249545]:         "osd_uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:53:19 compute-0 sweet_williamson[249545]:         "type": "bluestore"
Nov 26 11:53:19 compute-0 sweet_williamson[249545]:     },
Nov 26 11:53:19 compute-0 sweet_williamson[249545]:     "a9ad59a0-aa2e-4d92-b571-519d2d145b6a": {
Nov 26 11:53:19 compute-0 sweet_williamson[249545]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:53:19 compute-0 sweet_williamson[249545]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 11:53:19 compute-0 sweet_williamson[249545]:         "osd_id": 0,
Nov 26 11:53:19 compute-0 sweet_williamson[249545]:         "osd_uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:53:19 compute-0 sweet_williamson[249545]:         "type": "bluestore"
Nov 26 11:53:19 compute-0 sweet_williamson[249545]:     },
Nov 26 11:53:19 compute-0 sweet_williamson[249545]:     "d56156fb-7361-4bef-b06b-1320109b4323": {
Nov 26 11:53:19 compute-0 sweet_williamson[249545]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:53:19 compute-0 sweet_williamson[249545]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 11:53:19 compute-0 sweet_williamson[249545]:         "osd_id": 2,
Nov 26 11:53:19 compute-0 sweet_williamson[249545]:         "osd_uuid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:53:19 compute-0 sweet_williamson[249545]:         "type": "bluestore"
Nov 26 11:53:19 compute-0 sweet_williamson[249545]:     }
Nov 26 11:53:19 compute-0 sweet_williamson[249545]: }
Nov 26 11:53:19 compute-0 systemd[1]: libpod-9efb1d6c1348489118a08e9b9bee09a6811e7028a90fa34d484eff091d229e7f.scope: Deactivated successfully.
Nov 26 11:53:19 compute-0 podman[249532]: 2025-11-26 11:53:19.582030411 +0000 UTC m=+0.855417291 container died 9efb1d6c1348489118a08e9b9bee09a6811e7028a90fa34d484eff091d229e7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_williamson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:53:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c257c91684b6117a5a7f5d6686bd35cd7120b3cc3badf63cea63f2106c043cc-merged.mount: Deactivated successfully.
Nov 26 11:53:19 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:19 compute-0 podman[249532]: 2025-11-26 11:53:19.612584605 +0000 UTC m=+0.885971484 container remove 9efb1d6c1348489118a08e9b9bee09a6811e7028a90fa34d484eff091d229e7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_williamson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 11:53:19 compute-0 systemd[1]: libpod-conmon-9efb1d6c1348489118a08e9b9bee09a6811e7028a90fa34d484eff091d229e7f.scope: Deactivated successfully.
Nov 26 11:53:19 compute-0 sudo[249442]: pam_unix(sudo:session): session closed for user root
Nov 26 11:53:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:53:19 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:53:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:53:19 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:53:19 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev a419924f-67ee-4341-b4b8-bb8d8cc07d53 does not exist
Nov 26 11:53:19 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 68cb272a-b34f-4398-8e06-1c89f3875c1d does not exist
Nov 26 11:53:19 compute-0 sudo[249588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:53:19 compute-0 sudo[249588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:53:19 compute-0 sudo[249588]: pam_unix(sudo:session): session closed for user root
Nov 26 11:53:19 compute-0 sudo[249613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:53:19 compute-0 sudo[249613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:53:19 compute-0 sudo[249613]: pam_unix(sudo:session): session closed for user root
Nov 26 11:53:20 compute-0 ceph-mon[74928]: pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:20 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:53:20 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:53:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:53:21 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:22 compute-0 ceph-mon[74928]: pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:23 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:24 compute-0 ceph-mon[74928]: pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:25 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:53:26 compute-0 ceph-mon[74928]: pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:27 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:28 compute-0 ceph-mon[74928]: pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:29 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:30 compute-0 ceph-mon[74928]: pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:53:31 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:32 compute-0 ceph-mon[74928]: pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:33 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:34 compute-0 ceph-mon[74928]: pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:35 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:35 compute-0 podman[249638]: 2025-11-26 11:53:35.622111816 +0000 UTC m=+0.039613377 container health_status b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 26 11:53:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:53:36 compute-0 ceph-mon[74928]: pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:37 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:38 compute-0 podman[249658]: 2025-11-26 11:53:38.613184019 +0000 UTC m=+0.038318931 container health_status 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 11:53:38 compute-0 ceph-mon[74928]: pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:39 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:40 compute-0 ceph-mon[74928]: pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Optimize plan auto_2025-11-26_11:53:41
Nov 26 11:53:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 11:53:41 compute-0 ceph-mgr[75197]: [balancer INFO root] do_upmap
Nov 26 11:53:41 compute-0 ceph-mgr[75197]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', '.rgw.root', 'backups', '.mgr', 'vms', 'default.rgw.log', 'cephfs.cephfs.data']
Nov 26 11:53:41 compute-0 ceph-mgr[75197]: [balancer INFO root] prepared 0/10 changes
Nov 26 11:53:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:53:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:53:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:53:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:53:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:53:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:53:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 11:53:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:53:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 11:53:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:53:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:53:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:53:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:53:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:53:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:53:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:53:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:53:41 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:42 compute-0 ceph-mon[74928]: pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:43 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:44 compute-0 ceph-mon[74928]: pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:45 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:53:46 compute-0 ceph-mon[74928]: pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:47 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:47 compute-0 podman[249675]: 2025-11-26 11:53:47.629322101 +0000 UTC m=+0.053603202 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:53:48 compute-0 ceph-mon[74928]: pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:49 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 11:53:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:53:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 11:53:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:53:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:53:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:53:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:53:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:53:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:53:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:53:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:53:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:53:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 11:53:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:53:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:53:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:53:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 11:53:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:53:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 11:53:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:53:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:53:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:53:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 11:53:50 compute-0 ceph-mon[74928]: pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:53:51 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:52 compute-0 ceph-mon[74928]: pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:53 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:54 compute-0 ceph-mon[74928]: pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:55 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:53:56 compute-0 ceph-mon[74928]: pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:57 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:58 compute-0 ceph-mon[74928]: pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:53:59 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:00 compute-0 ceph-mon[74928]: pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:54:01 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:02 compute-0 ceph-mon[74928]: pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:54:02.986 159928 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:54:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:54:02.987 159928 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:54:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:54:02.987 159928 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:54:03 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:04 compute-0 ceph-mon[74928]: pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:05 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:54:05.196 159928 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:66:7a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '96:09:f9:2f:d1:50'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 11:54:05 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:54:05.197 159928 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 26 11:54:05 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:54:05.198 159928 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52e0423b-b2d6-4490-a138-5f72d3aa5a2d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 11:54:05 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:54:06 compute-0 podman[249698]: 2025-11-26 11:54:06.613264501 +0000 UTC m=+0.037697692 container health_status b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 26 11:54:06 compute-0 ceph-mon[74928]: pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:07 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:08 compute-0 ceph-mon[74928]: pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:09 compute-0 podman[249715]: 2025-11-26 11:54:09.609156896 +0000 UTC m=+0.035241038 container health_status 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 26 11:54:09 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:10 compute-0 ceph-mon[74928]: pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:54:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:54:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:54:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:54:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:54:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:54:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:54:11 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:12 compute-0 ceph-mon[74928]: pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:13 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:14 compute-0 ceph-mon[74928]: pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:15 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:54:16 compute-0 nova_compute[248203]: 2025-11-26 11:54:16.667 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:54:16 compute-0 nova_compute[248203]: 2025-11-26 11:54:16.667 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:54:16 compute-0 nova_compute[248203]: 2025-11-26 11:54:16.697 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:54:16 compute-0 nova_compute[248203]: 2025-11-26 11:54:16.698 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:54:16 compute-0 nova_compute[248203]: 2025-11-26 11:54:16.698 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:54:16 compute-0 nova_compute[248203]: 2025-11-26 11:54:16.698 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 11:54:16 compute-0 ceph-mon[74928]: pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:17 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:17 compute-0 nova_compute[248203]: 2025-11-26 11:54:17.625 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:54:17 compute-0 nova_compute[248203]: 2025-11-26 11:54:17.625 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 11:54:17 compute-0 nova_compute[248203]: 2025-11-26 11:54:17.625 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 11:54:17 compute-0 nova_compute[248203]: 2025-11-26 11:54:17.646 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 11:54:17 compute-0 nova_compute[248203]: 2025-11-26 11:54:17.647 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:54:17 compute-0 nova_compute[248203]: 2025-11-26 11:54:17.647 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:54:17 compute-0 nova_compute[248203]: 2025-11-26 11:54:17.647 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:54:17 compute-0 nova_compute[248203]: 2025-11-26 11:54:17.647 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:54:17 compute-0 nova_compute[248203]: 2025-11-26 11:54:17.669 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:54:17 compute-0 nova_compute[248203]: 2025-11-26 11:54:17.669 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:54:17 compute-0 nova_compute[248203]: 2025-11-26 11:54:17.669 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:54:17 compute-0 nova_compute[248203]: 2025-11-26 11:54:17.669 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 11:54:17 compute-0 nova_compute[248203]: 2025-11-26 11:54:17.669 248207 DEBUG oslo_concurrency.processutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 11:54:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 11:54:17 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1370728024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:54:17 compute-0 nova_compute[248203]: 2025-11-26 11:54:17.989 248207 DEBUG oslo_concurrency.processutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 11:54:18 compute-0 nova_compute[248203]: 2025-11-26 11:54:18.176 248207 WARNING nova.virt.libvirt.driver [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 11:54:18 compute-0 nova_compute[248203]: 2025-11-26 11:54:18.177 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5224MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 11:54:18 compute-0 nova_compute[248203]: 2025-11-26 11:54:18.177 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:54:18 compute-0 nova_compute[248203]: 2025-11-26 11:54:18.177 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:54:18 compute-0 nova_compute[248203]: 2025-11-26 11:54:18.232 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 11:54:18 compute-0 nova_compute[248203]: 2025-11-26 11:54:18.233 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 11:54:18 compute-0 nova_compute[248203]: 2025-11-26 11:54:18.252 248207 DEBUG oslo_concurrency.processutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 11:54:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 11:54:18 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3617935729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:54:18 compute-0 nova_compute[248203]: 2025-11-26 11:54:18.574 248207 DEBUG oslo_concurrency.processutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 11:54:18 compute-0 nova_compute[248203]: 2025-11-26 11:54:18.578 248207 DEBUG nova.compute.provider_tree [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Inventory has not changed in ProviderTree for provider: ffdf5b8d-24ca-43b0-a64a-b7345874e7b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 11:54:18 compute-0 nova_compute[248203]: 2025-11-26 11:54:18.593 248207 DEBUG nova.scheduler.client.report [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Inventory has not changed for provider ffdf5b8d-24ca-43b0-a64a-b7345874e7b4 based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 11:54:18 compute-0 nova_compute[248203]: 2025-11-26 11:54:18.594 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 11:54:18 compute-0 nova_compute[248203]: 2025-11-26 11:54:18.594 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.417s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:54:18 compute-0 podman[249773]: 2025-11-26 11:54:18.631161241 +0000 UTC m=+0.056579042 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller)
Nov 26 11:54:18 compute-0 ceph-mon[74928]: pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:18 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1370728024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:54:18 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3617935729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:54:19 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:19 compute-0 sudo[249798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:54:19 compute-0 sudo[249798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:54:19 compute-0 sudo[249798]: pam_unix(sudo:session): session closed for user root
Nov 26 11:54:19 compute-0 sudo[249823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:54:19 compute-0 sudo[249823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:54:19 compute-0 sudo[249823]: pam_unix(sudo:session): session closed for user root
Nov 26 11:54:19 compute-0 sudo[249848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:54:19 compute-0 sudo[249848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:54:19 compute-0 sudo[249848]: pam_unix(sudo:session): session closed for user root
Nov 26 11:54:19 compute-0 sudo[249873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 11:54:19 compute-0 sudo[249873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:54:20 compute-0 sudo[249873]: pam_unix(sudo:session): session closed for user root
Nov 26 11:54:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:54:20 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:54:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:54:20 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:54:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:54:20 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:54:20 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 82285549-4962-4dd0-a151-7c22461fef8d does not exist
Nov 26 11:54:20 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev bde5a609-b29d-41aa-82d0-ee8616dcee76 does not exist
Nov 26 11:54:20 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev a837d34d-0188-45f4-b258-38cb5e90e888 does not exist
Nov 26 11:54:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 11:54:20 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:54:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 11:54:20 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:54:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:54:20 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:54:20 compute-0 sudo[249927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:54:20 compute-0 sudo[249927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:54:20 compute-0 sudo[249927]: pam_unix(sudo:session): session closed for user root
Nov 26 11:54:20 compute-0 sudo[249952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:54:20 compute-0 sudo[249952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:54:20 compute-0 sudo[249952]: pam_unix(sudo:session): session closed for user root
Nov 26 11:54:20 compute-0 sudo[249977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:54:20 compute-0 sudo[249977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:54:20 compute-0 sudo[249977]: pam_unix(sudo:session): session closed for user root
Nov 26 11:54:20 compute-0 sudo[250002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 11:54:20 compute-0 sudo[250002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:54:20 compute-0 podman[250056]: 2025-11-26 11:54:20.638138307 +0000 UTC m=+0.026949168 container create 3457d21033c71f4289d7b73b5d93c24d48ef056987a419096a4add37416de895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:54:20 compute-0 systemd[1]: Started libpod-conmon-3457d21033c71f4289d7b73b5d93c24d48ef056987a419096a4add37416de895.scope.
Nov 26 11:54:20 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:54:20 compute-0 podman[250056]: 2025-11-26 11:54:20.700584562 +0000 UTC m=+0.089395424 container init 3457d21033c71f4289d7b73b5d93c24d48ef056987a419096a4add37416de895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:54:20 compute-0 podman[250056]: 2025-11-26 11:54:20.70500325 +0000 UTC m=+0.093814102 container start 3457d21033c71f4289d7b73b5d93c24d48ef056987a419096a4add37416de895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_zhukovsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:54:20 compute-0 podman[250056]: 2025-11-26 11:54:20.706056332 +0000 UTC m=+0.094867195 container attach 3457d21033c71f4289d7b73b5d93c24d48ef056987a419096a4add37416de895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:54:20 compute-0 compassionate_zhukovsky[250069]: 167 167
Nov 26 11:54:20 compute-0 systemd[1]: libpod-3457d21033c71f4289d7b73b5d93c24d48ef056987a419096a4add37416de895.scope: Deactivated successfully.
Nov 26 11:54:20 compute-0 conmon[250069]: conmon 3457d21033c71f4289d7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3457d21033c71f4289d7b73b5d93c24d48ef056987a419096a4add37416de895.scope/container/memory.events
Nov 26 11:54:20 compute-0 podman[250056]: 2025-11-26 11:54:20.709156529 +0000 UTC m=+0.097967391 container died 3457d21033c71f4289d7b73b5d93c24d48ef056987a419096a4add37416de895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:54:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-06298ca92ccb81cd31ff2c2dccbee4a238434e394d9f092239820c94d598ef32-merged.mount: Deactivated successfully.
Nov 26 11:54:20 compute-0 podman[250056]: 2025-11-26 11:54:20.627662697 +0000 UTC m=+0.016473580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:54:20 compute-0 podman[250056]: 2025-11-26 11:54:20.728178885 +0000 UTC m=+0.116989748 container remove 3457d21033c71f4289d7b73b5d93c24d48ef056987a419096a4add37416de895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 11:54:20 compute-0 systemd[1]: libpod-conmon-3457d21033c71f4289d7b73b5d93c24d48ef056987a419096a4add37416de895.scope: Deactivated successfully.
Nov 26 11:54:20 compute-0 ceph-mon[74928]: pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:20 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:54:20 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:54:20 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:54:20 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:54:20 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:54:20 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:54:20 compute-0 podman[250091]: 2025-11-26 11:54:20.845573013 +0000 UTC m=+0.026589822 container create 634a26c84c4de4ec757d9cad7d4606ce43e1e14ae386ef4a57711e8bbd58cf9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:54:20 compute-0 systemd[1]: Started libpod-conmon-634a26c84c4de4ec757d9cad7d4606ce43e1e14ae386ef4a57711e8bbd58cf9b.scope.
Nov 26 11:54:20 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:54:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a85d450cd92e81db18e928a58891bfdd1fe581af7b8069a7a1452ed9d30b4451/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:54:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a85d450cd92e81db18e928a58891bfdd1fe581af7b8069a7a1452ed9d30b4451/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:54:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a85d450cd92e81db18e928a58891bfdd1fe581af7b8069a7a1452ed9d30b4451/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:54:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a85d450cd92e81db18e928a58891bfdd1fe581af7b8069a7a1452ed9d30b4451/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:54:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a85d450cd92e81db18e928a58891bfdd1fe581af7b8069a7a1452ed9d30b4451/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:54:20 compute-0 podman[250091]: 2025-11-26 11:54:20.899692875 +0000 UTC m=+0.080709695 container init 634a26c84c4de4ec757d9cad7d4606ce43e1e14ae386ef4a57711e8bbd58cf9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_agnesi, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 11:54:20 compute-0 podman[250091]: 2025-11-26 11:54:20.907837235 +0000 UTC m=+0.088854046 container start 634a26c84c4de4ec757d9cad7d4606ce43e1e14ae386ef4a57711e8bbd58cf9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 11:54:20 compute-0 podman[250091]: 2025-11-26 11:54:20.909033578 +0000 UTC m=+0.090050388 container attach 634a26c84c4de4ec757d9cad7d4606ce43e1e14ae386ef4a57711e8bbd58cf9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 11:54:20 compute-0 podman[250091]: 2025-11-26 11:54:20.834944216 +0000 UTC m=+0.015961046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:54:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:54:21 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:21 compute-0 bold_agnesi[250104]: --> passed data devices: 0 physical, 3 LVM
Nov 26 11:54:21 compute-0 bold_agnesi[250104]: --> relative data size: 1.0
Nov 26 11:54:21 compute-0 bold_agnesi[250104]: --> All data devices are unavailable
Nov 26 11:54:21 compute-0 systemd[1]: libpod-634a26c84c4de4ec757d9cad7d4606ce43e1e14ae386ef4a57711e8bbd58cf9b.scope: Deactivated successfully.
Nov 26 11:54:21 compute-0 podman[250091]: 2025-11-26 11:54:21.70844798 +0000 UTC m=+0.889464780 container died 634a26c84c4de4ec757d9cad7d4606ce43e1e14ae386ef4a57711e8bbd58cf9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_agnesi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:54:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-a85d450cd92e81db18e928a58891bfdd1fe581af7b8069a7a1452ed9d30b4451-merged.mount: Deactivated successfully.
Nov 26 11:54:21 compute-0 podman[250091]: 2025-11-26 11:54:21.740484405 +0000 UTC m=+0.921501215 container remove 634a26c84c4de4ec757d9cad7d4606ce43e1e14ae386ef4a57711e8bbd58cf9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_agnesi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:54:21 compute-0 systemd[1]: libpod-conmon-634a26c84c4de4ec757d9cad7d4606ce43e1e14ae386ef4a57711e8bbd58cf9b.scope: Deactivated successfully.
Nov 26 11:54:21 compute-0 sudo[250002]: pam_unix(sudo:session): session closed for user root
Nov 26 11:54:21 compute-0 sudo[250143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:54:21 compute-0 sudo[250143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:54:21 compute-0 sudo[250143]: pam_unix(sudo:session): session closed for user root
Nov 26 11:54:21 compute-0 sudo[250168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:54:21 compute-0 sudo[250168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:54:21 compute-0 sudo[250168]: pam_unix(sudo:session): session closed for user root
Nov 26 11:54:21 compute-0 sudo[250193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:54:21 compute-0 sudo[250193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:54:21 compute-0 sudo[250193]: pam_unix(sudo:session): session closed for user root
Nov 26 11:54:21 compute-0 sudo[250218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- lvm list --format json
Nov 26 11:54:21 compute-0 sudo[250218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:54:22 compute-0 podman[250273]: 2025-11-26 11:54:22.168628454 +0000 UTC m=+0.027352017 container create 79f67e5dba608949a2f1e4d852bcf5351245c5adda715f8d8f50a24c3fd153b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ganguly, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:54:22 compute-0 systemd[1]: Started libpod-conmon-79f67e5dba608949a2f1e4d852bcf5351245c5adda715f8d8f50a24c3fd153b6.scope.
Nov 26 11:54:22 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:54:22 compute-0 podman[250273]: 2025-11-26 11:54:22.222279074 +0000 UTC m=+0.081002637 container init 79f67e5dba608949a2f1e4d852bcf5351245c5adda715f8d8f50a24c3fd153b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ganguly, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:54:22 compute-0 podman[250273]: 2025-11-26 11:54:22.226450415 +0000 UTC m=+0.085173969 container start 79f67e5dba608949a2f1e4d852bcf5351245c5adda715f8d8f50a24c3fd153b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Nov 26 11:54:22 compute-0 podman[250273]: 2025-11-26 11:54:22.227644032 +0000 UTC m=+0.086367586 container attach 79f67e5dba608949a2f1e4d852bcf5351245c5adda715f8d8f50a24c3fd153b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:54:22 compute-0 naughty_ganguly[250287]: 167 167
Nov 26 11:54:22 compute-0 systemd[1]: libpod-79f67e5dba608949a2f1e4d852bcf5351245c5adda715f8d8f50a24c3fd153b6.scope: Deactivated successfully.
Nov 26 11:54:22 compute-0 conmon[250287]: conmon 79f67e5dba608949a2f1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-79f67e5dba608949a2f1e4d852bcf5351245c5adda715f8d8f50a24c3fd153b6.scope/container/memory.events
Nov 26 11:54:22 compute-0 podman[250273]: 2025-11-26 11:54:22.230738337 +0000 UTC m=+0.089461890 container died 79f67e5dba608949a2f1e4d852bcf5351245c5adda715f8d8f50a24c3fd153b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ganguly, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:54:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-21fa8e9153337e7febc9f39e6c6702d4e1484d3110f34259d84e682713a32d94-merged.mount: Deactivated successfully.
Nov 26 11:54:22 compute-0 podman[250273]: 2025-11-26 11:54:22.250098139 +0000 UTC m=+0.108821682 container remove 79f67e5dba608949a2f1e4d852bcf5351245c5adda715f8d8f50a24c3fd153b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ganguly, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 11:54:22 compute-0 podman[250273]: 2025-11-26 11:54:22.156750335 +0000 UTC m=+0.015473888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:54:22 compute-0 systemd[1]: libpod-conmon-79f67e5dba608949a2f1e4d852bcf5351245c5adda715f8d8f50a24c3fd153b6.scope: Deactivated successfully.
Nov 26 11:54:22 compute-0 podman[250309]: 2025-11-26 11:54:22.369509524 +0000 UTC m=+0.028181469 container create 079a40d02e0c1181849ae432538320108796263c070ffcb62bab7655f2948829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_raman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 11:54:22 compute-0 systemd[1]: Started libpod-conmon-079a40d02e0c1181849ae432538320108796263c070ffcb62bab7655f2948829.scope.
Nov 26 11:54:22 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dc49c50b6069fac8e42129ecf12b52fe97bd2302d10386d8cf687ae15faa884/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dc49c50b6069fac8e42129ecf12b52fe97bd2302d10386d8cf687ae15faa884/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dc49c50b6069fac8e42129ecf12b52fe97bd2302d10386d8cf687ae15faa884/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dc49c50b6069fac8e42129ecf12b52fe97bd2302d10386d8cf687ae15faa884/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:54:22 compute-0 podman[250309]: 2025-11-26 11:54:22.424245246 +0000 UTC m=+0.082917201 container init 079a40d02e0c1181849ae432538320108796263c070ffcb62bab7655f2948829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_raman, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 11:54:22 compute-0 podman[250309]: 2025-11-26 11:54:22.429832543 +0000 UTC m=+0.088504468 container start 079a40d02e0c1181849ae432538320108796263c070ffcb62bab7655f2948829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 26 11:54:22 compute-0 podman[250309]: 2025-11-26 11:54:22.430898801 +0000 UTC m=+0.089570735 container attach 079a40d02e0c1181849ae432538320108796263c070ffcb62bab7655f2948829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_raman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:54:22 compute-0 podman[250309]: 2025-11-26 11:54:22.35788376 +0000 UTC m=+0.016555715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:54:22 compute-0 ceph-mon[74928]: pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:23 compute-0 agitated_raman[250322]: {
Nov 26 11:54:23 compute-0 agitated_raman[250322]:     "0": [
Nov 26 11:54:23 compute-0 agitated_raman[250322]:         {
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "devices": [
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "/dev/loop3"
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             ],
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "lv_name": "ceph_lv0",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "lv_size": "21470642176",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a9ad59a0-aa2e-4d92-b571-519d2d145b6a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "lv_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "name": "ceph_lv0",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "tags": {
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.block_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.cluster_name": "ceph",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.crush_device_class": "",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.encrypted": "0",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.osd_fsid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.osd_id": "0",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.type": "block",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.vdo": "0"
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             },
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "type": "block",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "vg_name": "ceph_vg0"
Nov 26 11:54:23 compute-0 agitated_raman[250322]:         }
Nov 26 11:54:23 compute-0 agitated_raman[250322]:     ],
Nov 26 11:54:23 compute-0 agitated_raman[250322]:     "1": [
Nov 26 11:54:23 compute-0 agitated_raman[250322]:         {
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "devices": [
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "/dev/loop4"
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             ],
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "lv_name": "ceph_lv1",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "lv_size": "21470642176",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2627095b-eef8-4027-bfef-68bf7cb6801f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "lv_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "name": "ceph_lv1",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "tags": {
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.block_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.cluster_name": "ceph",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.crush_device_class": "",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.encrypted": "0",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.osd_fsid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.osd_id": "1",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.type": "block",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.vdo": "0"
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             },
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "type": "block",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "vg_name": "ceph_vg1"
Nov 26 11:54:23 compute-0 agitated_raman[250322]:         }
Nov 26 11:54:23 compute-0 agitated_raman[250322]:     ],
Nov 26 11:54:23 compute-0 agitated_raman[250322]:     "2": [
Nov 26 11:54:23 compute-0 agitated_raman[250322]:         {
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "devices": [
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "/dev/loop5"
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             ],
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "lv_name": "ceph_lv2",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "lv_size": "21470642176",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d56156fb-7361-4bef-b06b-1320109b4323,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "lv_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "name": "ceph_lv2",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "tags": {
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.block_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.cluster_name": "ceph",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.crush_device_class": "",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.encrypted": "0",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.osd_fsid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.osd_id": "2",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.type": "block",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:                 "ceph.vdo": "0"
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             },
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "type": "block",
Nov 26 11:54:23 compute-0 agitated_raman[250322]:             "vg_name": "ceph_vg2"
Nov 26 11:54:23 compute-0 agitated_raman[250322]:         }
Nov 26 11:54:23 compute-0 agitated_raman[250322]:     ]
Nov 26 11:54:23 compute-0 agitated_raman[250322]: }
Nov 26 11:54:23 compute-0 systemd[1]: libpod-079a40d02e0c1181849ae432538320108796263c070ffcb62bab7655f2948829.scope: Deactivated successfully.
Nov 26 11:54:23 compute-0 podman[250309]: 2025-11-26 11:54:23.064724195 +0000 UTC m=+0.723396130 container died 079a40d02e0c1181849ae432538320108796263c070ffcb62bab7655f2948829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_raman, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 11:54:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-6dc49c50b6069fac8e42129ecf12b52fe97bd2302d10386d8cf687ae15faa884-merged.mount: Deactivated successfully.
Nov 26 11:54:23 compute-0 podman[250309]: 2025-11-26 11:54:23.096885694 +0000 UTC m=+0.755557629 container remove 079a40d02e0c1181849ae432538320108796263c070ffcb62bab7655f2948829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 11:54:23 compute-0 systemd[1]: libpod-conmon-079a40d02e0c1181849ae432538320108796263c070ffcb62bab7655f2948829.scope: Deactivated successfully.
Nov 26 11:54:23 compute-0 sudo[250218]: pam_unix(sudo:session): session closed for user root
Nov 26 11:54:23 compute-0 sudo[250340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:54:23 compute-0 sudo[250340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:54:23 compute-0 sudo[250340]: pam_unix(sudo:session): session closed for user root
Nov 26 11:54:23 compute-0 sudo[250365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:54:23 compute-0 sudo[250365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:54:23 compute-0 sudo[250365]: pam_unix(sudo:session): session closed for user root
Nov 26 11:54:23 compute-0 sudo[250390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:54:23 compute-0 sudo[250390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:54:23 compute-0 sudo[250390]: pam_unix(sudo:session): session closed for user root
Nov 26 11:54:23 compute-0 sudo[250415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- raw list --format json
Nov 26 11:54:23 compute-0 sudo[250415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:54:23 compute-0 podman[250470]: 2025-11-26 11:54:23.517281469 +0000 UTC m=+0.026975900 container create 239caa42045440f061e47229d160e44699477a36b8a9f52a0b3212db9692e6cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kilby, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:54:23 compute-0 systemd[1]: Started libpod-conmon-239caa42045440f061e47229d160e44699477a36b8a9f52a0b3212db9692e6cb.scope.
Nov 26 11:54:23 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:54:23 compute-0 podman[250470]: 2025-11-26 11:54:23.559592223 +0000 UTC m=+0.069286673 container init 239caa42045440f061e47229d160e44699477a36b8a9f52a0b3212db9692e6cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kilby, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:54:23 compute-0 podman[250470]: 2025-11-26 11:54:23.564193225 +0000 UTC m=+0.073887654 container start 239caa42045440f061e47229d160e44699477a36b8a9f52a0b3212db9692e6cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kilby, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:54:23 compute-0 blissful_kilby[250483]: 167 167
Nov 26 11:54:23 compute-0 systemd[1]: libpod-239caa42045440f061e47229d160e44699477a36b8a9f52a0b3212db9692e6cb.scope: Deactivated successfully.
Nov 26 11:54:23 compute-0 podman[250470]: 2025-11-26 11:54:23.567738457 +0000 UTC m=+0.077432907 container attach 239caa42045440f061e47229d160e44699477a36b8a9f52a0b3212db9692e6cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 11:54:23 compute-0 podman[250470]: 2025-11-26 11:54:23.568329109 +0000 UTC m=+0.078023540 container died 239caa42045440f061e47229d160e44699477a36b8a9f52a0b3212db9692e6cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kilby, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:54:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-f89dae45a6fa64e482d215274b55a385a85a84f038cf1250439de257dc150ed5-merged.mount: Deactivated successfully.
Nov 26 11:54:23 compute-0 podman[250470]: 2025-11-26 11:54:23.587503232 +0000 UTC m=+0.097197662 container remove 239caa42045440f061e47229d160e44699477a36b8a9f52a0b3212db9692e6cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Nov 26 11:54:23 compute-0 podman[250470]: 2025-11-26 11:54:23.505686002 +0000 UTC m=+0.015380453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:54:23 compute-0 systemd[1]: libpod-conmon-239caa42045440f061e47229d160e44699477a36b8a9f52a0b3212db9692e6cb.scope: Deactivated successfully.
Nov 26 11:54:23 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:23 compute-0 podman[250506]: 2025-11-26 11:54:23.704417386 +0000 UTC m=+0.027253682 container create d1d2ca038fe2970d2e47f7948e4ba24520f403cb55e0ef240be41049cdf00454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_curran, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 11:54:23 compute-0 systemd[1]: Started libpod-conmon-d1d2ca038fe2970d2e47f7948e4ba24520f403cb55e0ef240be41049cdf00454.scope.
Nov 26 11:54:23 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:54:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5261de8f180410b599f4cef94b5167897e54ef66be10353b3a50dff3021fd54e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:54:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5261de8f180410b599f4cef94b5167897e54ef66be10353b3a50dff3021fd54e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:54:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5261de8f180410b599f4cef94b5167897e54ef66be10353b3a50dff3021fd54e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:54:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5261de8f180410b599f4cef94b5167897e54ef66be10353b3a50dff3021fd54e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:54:23 compute-0 podman[250506]: 2025-11-26 11:54:23.759268635 +0000 UTC m=+0.082104951 container init d1d2ca038fe2970d2e47f7948e4ba24520f403cb55e0ef240be41049cdf00454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 11:54:23 compute-0 podman[250506]: 2025-11-26 11:54:23.764464295 +0000 UTC m=+0.087300581 container start d1d2ca038fe2970d2e47f7948e4ba24520f403cb55e0ef240be41049cdf00454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_curran, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 11:54:23 compute-0 podman[250506]: 2025-11-26 11:54:23.765522337 +0000 UTC m=+0.088358633 container attach d1d2ca038fe2970d2e47f7948e4ba24520f403cb55e0ef240be41049cdf00454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 11:54:23 compute-0 podman[250506]: 2025-11-26 11:54:23.693161589 +0000 UTC m=+0.015997905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:54:24 compute-0 gifted_curran[250520]: {
Nov 26 11:54:24 compute-0 gifted_curran[250520]:     "2627095b-eef8-4027-bfef-68bf7cb6801f": {
Nov 26 11:54:24 compute-0 gifted_curran[250520]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:54:24 compute-0 gifted_curran[250520]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 11:54:24 compute-0 gifted_curran[250520]:         "osd_id": 1,
Nov 26 11:54:24 compute-0 gifted_curran[250520]:         "osd_uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:54:24 compute-0 gifted_curran[250520]:         "type": "bluestore"
Nov 26 11:54:24 compute-0 gifted_curran[250520]:     },
Nov 26 11:54:24 compute-0 gifted_curran[250520]:     "a9ad59a0-aa2e-4d92-b571-519d2d145b6a": {
Nov 26 11:54:24 compute-0 gifted_curran[250520]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:54:24 compute-0 gifted_curran[250520]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 11:54:24 compute-0 gifted_curran[250520]:         "osd_id": 0,
Nov 26 11:54:24 compute-0 gifted_curran[250520]:         "osd_uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:54:24 compute-0 gifted_curran[250520]:         "type": "bluestore"
Nov 26 11:54:24 compute-0 gifted_curran[250520]:     },
Nov 26 11:54:24 compute-0 gifted_curran[250520]:     "d56156fb-7361-4bef-b06b-1320109b4323": {
Nov 26 11:54:24 compute-0 gifted_curran[250520]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:54:24 compute-0 gifted_curran[250520]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 11:54:24 compute-0 gifted_curran[250520]:         "osd_id": 2,
Nov 26 11:54:24 compute-0 gifted_curran[250520]:         "osd_uuid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:54:24 compute-0 gifted_curran[250520]:         "type": "bluestore"
Nov 26 11:54:24 compute-0 gifted_curran[250520]:     }
Nov 26 11:54:24 compute-0 gifted_curran[250520]: }
Nov 26 11:54:24 compute-0 systemd[1]: libpod-d1d2ca038fe2970d2e47f7948e4ba24520f403cb55e0ef240be41049cdf00454.scope: Deactivated successfully.
Nov 26 11:54:24 compute-0 podman[250553]: 2025-11-26 11:54:24.546415986 +0000 UTC m=+0.015979590 container died d1d2ca038fe2970d2e47f7948e4ba24520f403cb55e0ef240be41049cdf00454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 11:54:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-5261de8f180410b599f4cef94b5167897e54ef66be10353b3a50dff3021fd54e-merged.mount: Deactivated successfully.
Nov 26 11:54:24 compute-0 podman[250553]: 2025-11-26 11:54:24.577440366 +0000 UTC m=+0.047003949 container remove d1d2ca038fe2970d2e47f7948e4ba24520f403cb55e0ef240be41049cdf00454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_curran, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 11:54:24 compute-0 systemd[1]: libpod-conmon-d1d2ca038fe2970d2e47f7948e4ba24520f403cb55e0ef240be41049cdf00454.scope: Deactivated successfully.
Nov 26 11:54:24 compute-0 sudo[250415]: pam_unix(sudo:session): session closed for user root
Nov 26 11:54:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:54:24 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:54:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:54:24 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:54:24 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 89284bc8-bbd9-49d0-8141-ce3c42a6ff55 does not exist
Nov 26 11:54:24 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 1fe3efc7-0c20-4f7e-9a5d-211e5d62a262 does not exist
Nov 26 11:54:24 compute-0 sudo[250564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:54:24 compute-0 sudo[250564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:54:24 compute-0 sudo[250564]: pam_unix(sudo:session): session closed for user root
Nov 26 11:54:24 compute-0 sudo[250589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:54:24 compute-0 sudo[250589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:54:24 compute-0 sudo[250589]: pam_unix(sudo:session): session closed for user root
Nov 26 11:54:24 compute-0 ceph-mon[74928]: pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:24 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:54:24 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:54:25 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:54:26 compute-0 ceph-mon[74928]: pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:27 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 11:54:28 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3764512076' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 11:54:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 11:54:28 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3764512076' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 11:54:28 compute-0 ceph-mon[74928]: pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:28 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/3764512076' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 11:54:28 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/3764512076' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 11:54:29 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:30 compute-0 ceph-mon[74928]: pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:54:31 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:32 compute-0 ceph-mon[74928]: pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:33 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:34 compute-0 ceph-mon[74928]: pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:35 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:54:36 compute-0 ceph-mon[74928]: pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:37 compute-0 podman[250614]: 2025-11-26 11:54:37.624177201 +0000 UTC m=+0.046399852 container health_status b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 11:54:37 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:38 compute-0 ceph-mon[74928]: pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:39 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:40 compute-0 podman[250630]: 2025-11-26 11:54:40.611453599 +0000 UTC m=+0.037713482 container health_status 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 26 11:54:40 compute-0 ceph-mon[74928]: pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Optimize plan auto_2025-11-26_11:54:41
Nov 26 11:54:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 11:54:41 compute-0 ceph-mgr[75197]: [balancer INFO root] do_upmap
Nov 26 11:54:41 compute-0 ceph-mgr[75197]: [balancer INFO root] pools ['.rgw.root', 'backups', 'default.rgw.log', 'default.rgw.control', '.mgr', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes']
Nov 26 11:54:41 compute-0 ceph-mgr[75197]: [balancer INFO root] prepared 0/10 changes
Nov 26 11:54:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:54:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:54:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:54:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:54:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:54:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:54:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 11:54:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:54:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 11:54:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:54:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:54:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:54:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:54:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:54:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:54:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:54:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:54:41 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:42 compute-0 ceph-mon[74928]: pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:43 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:44 compute-0 ceph-mon[74928]: pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:45 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:54:46 compute-0 ceph-mon[74928]: pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:47 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:48 compute-0 ceph-mon[74928]: pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:49 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:49 compute-0 podman[250645]: 2025-11-26 11:54:49.633117852 +0000 UTC m=+0.059461208 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 26 11:54:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 11:54:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:54:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 11:54:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:54:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:54:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:54:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:54:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:54:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:54:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:54:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:54:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:54:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 11:54:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:54:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:54:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:54:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 11:54:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:54:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 11:54:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:54:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:54:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:54:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 11:54:50 compute-0 ceph-mon[74928]: pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:54:51 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:52 compute-0 ceph-mon[74928]: pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:53 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:54 compute-0 ceph-mon[74928]: pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:55 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:54:56 compute-0 ceph-mon[74928]: pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:57 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:58 compute-0 ceph-mon[74928]: pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:54:59 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:00 compute-0 ceph-mon[74928]: pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:55:01 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:02 compute-0 ceph-mon[74928]: pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:55:02.986 159928 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:55:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:55:02.987 159928 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:55:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:55:02.987 159928 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:55:03 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:04 compute-0 ceph-mon[74928]: pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:05 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:55:06 compute-0 ceph-mon[74928]: pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:07 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:08 compute-0 podman[250669]: 2025-11-26 11:55:08.614110236 +0000 UTC m=+0.040660997 container health_status b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 26 11:55:08 compute-0 ceph-mon[74928]: pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:09 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:10 compute-0 ceph-mon[74928]: pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:10 compute-0 podman[250686]: 2025-11-26 11:55:10.928087525 +0000 UTC m=+0.033656820 container health_status 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:55:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:55:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:55:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:55:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:55:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:55:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:55:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:55:11 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:12 compute-0 ceph-mon[74928]: pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:13 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:14 compute-0 ceph-mon[74928]: pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:15 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:16 compute-0 nova_compute[248203]: 2025-11-26 11:55:16.573 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:55:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:55:16 compute-0 nova_compute[248203]: 2025-11-26 11:55:16.625 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:55:16 compute-0 nova_compute[248203]: 2025-11-26 11:55:16.625 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 11:55:16 compute-0 ceph-mon[74928]: pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:17 compute-0 nova_compute[248203]: 2025-11-26 11:55:17.621 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:55:17 compute-0 nova_compute[248203]: 2025-11-26 11:55:17.625 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:55:17 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:17 compute-0 nova_compute[248203]: 2025-11-26 11:55:17.649 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:55:17 compute-0 nova_compute[248203]: 2025-11-26 11:55:17.650 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:55:17 compute-0 nova_compute[248203]: 2025-11-26 11:55:17.650 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:55:17 compute-0 nova_compute[248203]: 2025-11-26 11:55:17.650 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 11:55:17 compute-0 nova_compute[248203]: 2025-11-26 11:55:17.650 248207 DEBUG oslo_concurrency.processutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 11:55:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 11:55:17 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1696667946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:55:17 compute-0 nova_compute[248203]: 2025-11-26 11:55:17.967 248207 DEBUG oslo_concurrency.processutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 11:55:18 compute-0 nova_compute[248203]: 2025-11-26 11:55:18.149 248207 WARNING nova.virt.libvirt.driver [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 11:55:18 compute-0 nova_compute[248203]: 2025-11-26 11:55:18.150 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5210MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 11:55:18 compute-0 nova_compute[248203]: 2025-11-26 11:55:18.150 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:55:18 compute-0 nova_compute[248203]: 2025-11-26 11:55:18.150 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:55:18 compute-0 nova_compute[248203]: 2025-11-26 11:55:18.200 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 11:55:18 compute-0 nova_compute[248203]: 2025-11-26 11:55:18.200 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 11:55:18 compute-0 nova_compute[248203]: 2025-11-26 11:55:18.214 248207 DEBUG oslo_concurrency.processutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 11:55:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 11:55:18 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/40316367' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:55:18 compute-0 nova_compute[248203]: 2025-11-26 11:55:18.529 248207 DEBUG oslo_concurrency.processutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 11:55:18 compute-0 nova_compute[248203]: 2025-11-26 11:55:18.532 248207 DEBUG nova.compute.provider_tree [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Inventory has not changed in ProviderTree for provider: ffdf5b8d-24ca-43b0-a64a-b7345874e7b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 11:55:18 compute-0 nova_compute[248203]: 2025-11-26 11:55:18.545 248207 DEBUG nova.scheduler.client.report [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Inventory has not changed for provider ffdf5b8d-24ca-43b0-a64a-b7345874e7b4 based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 11:55:18 compute-0 nova_compute[248203]: 2025-11-26 11:55:18.546 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 11:55:18 compute-0 nova_compute[248203]: 2025-11-26 11:55:18.547 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.396s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:55:18 compute-0 ceph-mon[74928]: pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:18 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1696667946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:55:18 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/40316367' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:55:19 compute-0 nova_compute[248203]: 2025-11-26 11:55:19.548 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:55:19 compute-0 nova_compute[248203]: 2025-11-26 11:55:19.548 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 11:55:19 compute-0 nova_compute[248203]: 2025-11-26 11:55:19.548 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 11:55:19 compute-0 nova_compute[248203]: 2025-11-26 11:55:19.562 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 11:55:19 compute-0 nova_compute[248203]: 2025-11-26 11:55:19.562 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:55:19 compute-0 nova_compute[248203]: 2025-11-26 11:55:19.562 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:55:19 compute-0 nova_compute[248203]: 2025-11-26 11:55:19.625 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:55:19 compute-0 nova_compute[248203]: 2025-11-26 11:55:19.625 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:55:19 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:20 compute-0 podman[250746]: 2025-11-26 11:55:20.628235763 +0000 UTC m=+0.053907409 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 26 11:55:20 compute-0 ceph-mon[74928]: pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:55:21 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:21 compute-0 ceph-mon[74928]: pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:23 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:24 compute-0 ceph-mon[74928]: pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:24 compute-0 sudo[250769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:55:24 compute-0 sudo[250769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:55:24 compute-0 sudo[250769]: pam_unix(sudo:session): session closed for user root
Nov 26 11:55:24 compute-0 sudo[250794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:55:24 compute-0 sudo[250794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:55:24 compute-0 sudo[250794]: pam_unix(sudo:session): session closed for user root
Nov 26 11:55:24 compute-0 sudo[250819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:55:24 compute-0 sudo[250819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:55:24 compute-0 sudo[250819]: pam_unix(sudo:session): session closed for user root
Nov 26 11:55:24 compute-0 sudo[250844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 11:55:24 compute-0 sudo[250844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:55:25 compute-0 sudo[250844]: pam_unix(sudo:session): session closed for user root
Nov 26 11:55:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:55:25 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:55:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:55:25 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:55:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:55:25 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:55:25 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 98649a97-ae65-41a6-b24e-067a245eb78b does not exist
Nov 26 11:55:25 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev a083c678-eccc-49e5-ac24-db9f20fd5eba does not exist
Nov 26 11:55:25 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 63e36e68-5be5-4f39-b33a-251c81da075b does not exist
Nov 26 11:55:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 11:55:25 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:55:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 11:55:25 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:55:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:55:25 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:55:25 compute-0 sudo[250898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:55:25 compute-0 sudo[250898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:55:25 compute-0 sudo[250898]: pam_unix(sudo:session): session closed for user root
Nov 26 11:55:25 compute-0 sudo[250923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:55:25 compute-0 sudo[250923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:55:25 compute-0 sudo[250923]: pam_unix(sudo:session): session closed for user root
Nov 26 11:55:25 compute-0 sudo[250948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:55:25 compute-0 sudo[250948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:55:25 compute-0 sudo[250948]: pam_unix(sudo:session): session closed for user root
Nov 26 11:55:25 compute-0 sudo[250973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 11:55:25 compute-0 sudo[250973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:55:25 compute-0 podman[251028]: 2025-11-26 11:55:25.614506595 +0000 UTC m=+0.028388324 container create e0249ea59d50574508ebd1a031d3d7c70a3db63a9859f73dcd817a25d876dc8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 11:55:25 compute-0 systemd[1]: Started libpod-conmon-e0249ea59d50574508ebd1a031d3d7c70a3db63a9859f73dcd817a25d876dc8f.scope.
Nov 26 11:55:25 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:25 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:55:25 compute-0 podman[251028]: 2025-11-26 11:55:25.665057716 +0000 UTC m=+0.078939455 container init e0249ea59d50574508ebd1a031d3d7c70a3db63a9859f73dcd817a25d876dc8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 11:55:25 compute-0 podman[251028]: 2025-11-26 11:55:25.670580157 +0000 UTC m=+0.084461886 container start e0249ea59d50574508ebd1a031d3d7c70a3db63a9859f73dcd817a25d876dc8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_robinson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 26 11:55:25 compute-0 podman[251028]: 2025-11-26 11:55:25.671652297 +0000 UTC m=+0.085534027 container attach e0249ea59d50574508ebd1a031d3d7c70a3db63a9859f73dcd817a25d876dc8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_robinson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 11:55:25 compute-0 vigorous_robinson[251041]: 167 167
Nov 26 11:55:25 compute-0 systemd[1]: libpod-e0249ea59d50574508ebd1a031d3d7c70a3db63a9859f73dcd817a25d876dc8f.scope: Deactivated successfully.
Nov 26 11:55:25 compute-0 conmon[251041]: conmon e0249ea59d50574508eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e0249ea59d50574508ebd1a031d3d7c70a3db63a9859f73dcd817a25d876dc8f.scope/container/memory.events
Nov 26 11:55:25 compute-0 podman[251028]: 2025-11-26 11:55:25.675050202 +0000 UTC m=+0.088931932 container died e0249ea59d50574508ebd1a031d3d7c70a3db63a9859f73dcd817a25d876dc8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_robinson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:55:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef0765146fb027c0cedcbe91e90bfd08d296c9931f4193c0150bdde38a79ffa3-merged.mount: Deactivated successfully.
Nov 26 11:55:25 compute-0 podman[251028]: 2025-11-26 11:55:25.696038727 +0000 UTC m=+0.109920456 container remove e0249ea59d50574508ebd1a031d3d7c70a3db63a9859f73dcd817a25d876dc8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:55:25 compute-0 podman[251028]: 2025-11-26 11:55:25.603272448 +0000 UTC m=+0.017154178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:55:25 compute-0 systemd[1]: libpod-conmon-e0249ea59d50574508ebd1a031d3d7c70a3db63a9859f73dcd817a25d876dc8f.scope: Deactivated successfully.
Nov 26 11:55:25 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:55:25 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:55:25 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:55:25 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:55:25 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:55:25 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:55:25 compute-0 podman[251063]: 2025-11-26 11:55:25.814237107 +0000 UTC m=+0.027638340 container create b28f4adc025c746e492a81cb6668d9d0828b0dd8715239b4f584877dbcca39b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_tu, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:55:25 compute-0 systemd[1]: Started libpod-conmon-b28f4adc025c746e492a81cb6668d9d0828b0dd8715239b4f584877dbcca39b3.scope.
Nov 26 11:55:25 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7424caa680e648a7fcb5b478516a1c4efeb40d3c208d3ed44130856b6e009005/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7424caa680e648a7fcb5b478516a1c4efeb40d3c208d3ed44130856b6e009005/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7424caa680e648a7fcb5b478516a1c4efeb40d3c208d3ed44130856b6e009005/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7424caa680e648a7fcb5b478516a1c4efeb40d3c208d3ed44130856b6e009005/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7424caa680e648a7fcb5b478516a1c4efeb40d3c208d3ed44130856b6e009005/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:55:25 compute-0 podman[251063]: 2025-11-26 11:55:25.875414499 +0000 UTC m=+0.088815733 container init b28f4adc025c746e492a81cb6668d9d0828b0dd8715239b4f584877dbcca39b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_tu, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:55:25 compute-0 podman[251063]: 2025-11-26 11:55:25.883177763 +0000 UTC m=+0.096578997 container start b28f4adc025c746e492a81cb6668d9d0828b0dd8715239b4f584877dbcca39b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_tu, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 11:55:25 compute-0 podman[251063]: 2025-11-26 11:55:25.88439587 +0000 UTC m=+0.097797104 container attach b28f4adc025c746e492a81cb6668d9d0828b0dd8715239b4f584877dbcca39b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 11:55:25 compute-0 podman[251063]: 2025-11-26 11:55:25.803691688 +0000 UTC m=+0.017092942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:55:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:55:26 compute-0 hopeful_tu[251076]: --> passed data devices: 0 physical, 3 LVM
Nov 26 11:55:26 compute-0 hopeful_tu[251076]: --> relative data size: 1.0
Nov 26 11:55:26 compute-0 hopeful_tu[251076]: --> All data devices are unavailable
Nov 26 11:55:26 compute-0 systemd[1]: libpod-b28f4adc025c746e492a81cb6668d9d0828b0dd8715239b4f584877dbcca39b3.scope: Deactivated successfully.
Nov 26 11:55:26 compute-0 conmon[251076]: conmon b28f4adc025c746e492a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b28f4adc025c746e492a81cb6668d9d0828b0dd8715239b4f584877dbcca39b3.scope/container/memory.events
Nov 26 11:55:26 compute-0 podman[251063]: 2025-11-26 11:55:26.686365243 +0000 UTC m=+0.899766477 container died b28f4adc025c746e492a81cb6668d9d0828b0dd8715239b4f584877dbcca39b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_tu, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 11:55:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-7424caa680e648a7fcb5b478516a1c4efeb40d3c208d3ed44130856b6e009005-merged.mount: Deactivated successfully.
Nov 26 11:55:26 compute-0 ceph-mon[74928]: pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:26 compute-0 podman[251063]: 2025-11-26 11:55:26.717444528 +0000 UTC m=+0.930845763 container remove b28f4adc025c746e492a81cb6668d9d0828b0dd8715239b4f584877dbcca39b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_tu, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:55:26 compute-0 systemd[1]: libpod-conmon-b28f4adc025c746e492a81cb6668d9d0828b0dd8715239b4f584877dbcca39b3.scope: Deactivated successfully.
Nov 26 11:55:26 compute-0 sudo[250973]: pam_unix(sudo:session): session closed for user root
Nov 26 11:55:26 compute-0 sudo[251115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:55:26 compute-0 sudo[251115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:55:26 compute-0 sudo[251115]: pam_unix(sudo:session): session closed for user root
Nov 26 11:55:26 compute-0 sudo[251140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:55:26 compute-0 sudo[251140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:55:26 compute-0 sudo[251140]: pam_unix(sudo:session): session closed for user root
Nov 26 11:55:26 compute-0 sudo[251165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:55:26 compute-0 sudo[251165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:55:26 compute-0 sudo[251165]: pam_unix(sudo:session): session closed for user root
Nov 26 11:55:26 compute-0 sudo[251190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- lvm list --format json
Nov 26 11:55:26 compute-0 sudo[251190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:55:27 compute-0 podman[251245]: 2025-11-26 11:55:27.129571197 +0000 UTC m=+0.027165800 container create 801d8993a818ae5079495f4f08aac277a14190fbe6a5c8481b371db8fe9027ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 11:55:27 compute-0 systemd[1]: Started libpod-conmon-801d8993a818ae5079495f4f08aac277a14190fbe6a5c8481b371db8fe9027ee.scope.
Nov 26 11:55:27 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:55:27 compute-0 podman[251245]: 2025-11-26 11:55:27.181219237 +0000 UTC m=+0.078813849 container init 801d8993a818ae5079495f4f08aac277a14190fbe6a5c8481b371db8fe9027ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:55:27 compute-0 podman[251245]: 2025-11-26 11:55:27.185749486 +0000 UTC m=+0.083344089 container start 801d8993a818ae5079495f4f08aac277a14190fbe6a5c8481b371db8fe9027ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 11:55:27 compute-0 infallible_goldwasser[251258]: 167 167
Nov 26 11:55:27 compute-0 systemd[1]: libpod-801d8993a818ae5079495f4f08aac277a14190fbe6a5c8481b371db8fe9027ee.scope: Deactivated successfully.
Nov 26 11:55:27 compute-0 podman[251245]: 2025-11-26 11:55:27.188988662 +0000 UTC m=+0.086583294 container attach 801d8993a818ae5079495f4f08aac277a14190fbe6a5c8481b371db8fe9027ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:55:27 compute-0 conmon[251258]: conmon 801d8993a818ae507949 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-801d8993a818ae5079495f4f08aac277a14190fbe6a5c8481b371db8fe9027ee.scope/container/memory.events
Nov 26 11:55:27 compute-0 podman[251245]: 2025-11-26 11:55:27.189807336 +0000 UTC m=+0.087401939 container died 801d8993a818ae5079495f4f08aac277a14190fbe6a5c8481b371db8fe9027ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:55:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5bf10815d4921276db9dd7df99143073758c065c4d8076a430716ec7c8ef5e5-merged.mount: Deactivated successfully.
Nov 26 11:55:27 compute-0 podman[251245]: 2025-11-26 11:55:27.21021774 +0000 UTC m=+0.107812343 container remove 801d8993a818ae5079495f4f08aac277a14190fbe6a5c8481b371db8fe9027ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 11:55:27 compute-0 podman[251245]: 2025-11-26 11:55:27.117818523 +0000 UTC m=+0.015413145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:55:27 compute-0 systemd[1]: libpod-conmon-801d8993a818ae5079495f4f08aac277a14190fbe6a5c8481b371db8fe9027ee.scope: Deactivated successfully.
Nov 26 11:55:27 compute-0 podman[251280]: 2025-11-26 11:55:27.327997361 +0000 UTC m=+0.028429903 container create 27546e690bf85c539d7d0ce17c3c2b81edc7b46da4df0dc4494933b64830626b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dirac, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 11:55:27 compute-0 systemd[1]: Started libpod-conmon-27546e690bf85c539d7d0ce17c3c2b81edc7b46da4df0dc4494933b64830626b.scope.
Nov 26 11:55:27 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37daac2e349014fab0c64be4e85d0bac13385c7ab2fe146345521b3cbd5c87ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37daac2e349014fab0c64be4e85d0bac13385c7ab2fe146345521b3cbd5c87ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37daac2e349014fab0c64be4e85d0bac13385c7ab2fe146345521b3cbd5c87ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37daac2e349014fab0c64be4e85d0bac13385c7ab2fe146345521b3cbd5c87ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:55:27 compute-0 podman[251280]: 2025-11-26 11:55:27.373402001 +0000 UTC m=+0.073834543 container init 27546e690bf85c539d7d0ce17c3c2b81edc7b46da4df0dc4494933b64830626b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dirac, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:55:27 compute-0 podman[251280]: 2025-11-26 11:55:27.378056505 +0000 UTC m=+0.078489047 container start 27546e690bf85c539d7d0ce17c3c2b81edc7b46da4df0dc4494933b64830626b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 11:55:27 compute-0 podman[251280]: 2025-11-26 11:55:27.379446455 +0000 UTC m=+0.079878998 container attach 27546e690bf85c539d7d0ce17c3c2b81edc7b46da4df0dc4494933b64830626b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dirac, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:55:27 compute-0 podman[251280]: 2025-11-26 11:55:27.317485595 +0000 UTC m=+0.017918158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:55:27 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:27 compute-0 frosty_dirac[251293]: {
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:     "0": [
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:         {
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "devices": [
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "/dev/loop3"
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             ],
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "lv_name": "ceph_lv0",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "lv_size": "21470642176",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a9ad59a0-aa2e-4d92-b571-519d2d145b6a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "lv_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "name": "ceph_lv0",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "tags": {
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.block_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.cluster_name": "ceph",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.crush_device_class": "",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.encrypted": "0",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.osd_fsid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.osd_id": "0",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.type": "block",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.vdo": "0"
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             },
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "type": "block",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "vg_name": "ceph_vg0"
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:         }
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:     ],
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:     "1": [
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:         {
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "devices": [
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "/dev/loop4"
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             ],
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "lv_name": "ceph_lv1",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "lv_size": "21470642176",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2627095b-eef8-4027-bfef-68bf7cb6801f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "lv_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "name": "ceph_lv1",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "tags": {
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.block_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.cluster_name": "ceph",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.crush_device_class": "",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.encrypted": "0",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.osd_fsid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.osd_id": "1",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.type": "block",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.vdo": "0"
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             },
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "type": "block",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "vg_name": "ceph_vg1"
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:         }
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:     ],
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:     "2": [
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:         {
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "devices": [
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "/dev/loop5"
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             ],
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "lv_name": "ceph_lv2",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "lv_size": "21470642176",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d56156fb-7361-4bef-b06b-1320109b4323,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "lv_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "name": "ceph_lv2",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "tags": {
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.block_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.cluster_name": "ceph",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.crush_device_class": "",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.encrypted": "0",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.osd_fsid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.osd_id": "2",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.type": "block",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:                 "ceph.vdo": "0"
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             },
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "type": "block",
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:             "vg_name": "ceph_vg2"
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:         }
Nov 26 11:55:27 compute-0 frosty_dirac[251293]:     ]
Nov 26 11:55:27 compute-0 frosty_dirac[251293]: }
Nov 26 11:55:28 compute-0 systemd[1]: libpod-27546e690bf85c539d7d0ce17c3c2b81edc7b46da4df0dc4494933b64830626b.scope: Deactivated successfully.
Nov 26 11:55:28 compute-0 podman[251302]: 2025-11-26 11:55:28.033382748 +0000 UTC m=+0.016726551 container died 27546e690bf85c539d7d0ce17c3c2b81edc7b46da4df0dc4494933b64830626b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:55:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-37daac2e349014fab0c64be4e85d0bac13385c7ab2fe146345521b3cbd5c87ad-merged.mount: Deactivated successfully.
Nov 26 11:55:28 compute-0 podman[251302]: 2025-11-26 11:55:28.064085364 +0000 UTC m=+0.047429157 container remove 27546e690bf85c539d7d0ce17c3c2b81edc7b46da4df0dc4494933b64830626b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 11:55:28 compute-0 systemd[1]: libpod-conmon-27546e690bf85c539d7d0ce17c3c2b81edc7b46da4df0dc4494933b64830626b.scope: Deactivated successfully.
Nov 26 11:55:28 compute-0 sudo[251190]: pam_unix(sudo:session): session closed for user root
Nov 26 11:55:28 compute-0 sudo[251314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:55:28 compute-0 sudo[251314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:55:28 compute-0 sudo[251314]: pam_unix(sudo:session): session closed for user root
Nov 26 11:55:28 compute-0 sudo[251339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:55:28 compute-0 sudo[251339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:55:28 compute-0 sudo[251339]: pam_unix(sudo:session): session closed for user root
Nov 26 11:55:28 compute-0 sudo[251364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:55:28 compute-0 sudo[251364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:55:28 compute-0 sudo[251364]: pam_unix(sudo:session): session closed for user root
Nov 26 11:55:28 compute-0 sudo[251389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- raw list --format json
Nov 26 11:55:28 compute-0 sudo[251389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:55:28 compute-0 podman[251443]: 2025-11-26 11:55:28.491224022 +0000 UTC m=+0.027235683 container create 179017a6d39cb2b5d5f072a1dffc6429077667042017f962d6dc9ae8a6cf8e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cray, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 26 11:55:28 compute-0 systemd[1]: Started libpod-conmon-179017a6d39cb2b5d5f072a1dffc6429077667042017f962d6dc9ae8a6cf8e53.scope.
Nov 26 11:55:28 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:55:28 compute-0 podman[251443]: 2025-11-26 11:55:28.539775925 +0000 UTC m=+0.075787606 container init 179017a6d39cb2b5d5f072a1dffc6429077667042017f962d6dc9ae8a6cf8e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cray, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 11:55:28 compute-0 podman[251443]: 2025-11-26 11:55:28.544681221 +0000 UTC m=+0.080692883 container start 179017a6d39cb2b5d5f072a1dffc6429077667042017f962d6dc9ae8a6cf8e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:55:28 compute-0 podman[251443]: 2025-11-26 11:55:28.545823103 +0000 UTC m=+0.081834784 container attach 179017a6d39cb2b5d5f072a1dffc6429077667042017f962d6dc9ae8a6cf8e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:55:28 compute-0 exciting_cray[251457]: 167 167
Nov 26 11:55:28 compute-0 systemd[1]: libpod-179017a6d39cb2b5d5f072a1dffc6429077667042017f962d6dc9ae8a6cf8e53.scope: Deactivated successfully.
Nov 26 11:55:28 compute-0 podman[251443]: 2025-11-26 11:55:28.54774711 +0000 UTC m=+0.083758771 container died 179017a6d39cb2b5d5f072a1dffc6429077667042017f962d6dc9ae8a6cf8e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:55:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-d29a2675a10f629c9e25b1c10ef75dfad3dc2242201915cdd3e7c723060554ff-merged.mount: Deactivated successfully.
Nov 26 11:55:28 compute-0 podman[251443]: 2025-11-26 11:55:28.569180965 +0000 UTC m=+0.105192626 container remove 179017a6d39cb2b5d5f072a1dffc6429077667042017f962d6dc9ae8a6cf8e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:55:28 compute-0 podman[251443]: 2025-11-26 11:55:28.480954172 +0000 UTC m=+0.016965853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:55:28 compute-0 systemd[1]: libpod-conmon-179017a6d39cb2b5d5f072a1dffc6429077667042017f962d6dc9ae8a6cf8e53.scope: Deactivated successfully.
Nov 26 11:55:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 11:55:28 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/876911978' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 11:55:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 11:55:28 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/876911978' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 11:55:28 compute-0 podman[251479]: 2025-11-26 11:55:28.687846585 +0000 UTC m=+0.026385889 container create 362a5d0d487087d529c86b6ca08ef49c0d4fcedc9b2e451d872f56c82af111aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_easley, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 11:55:28 compute-0 ceph-mon[74928]: pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:28 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/876911978' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 11:55:28 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/876911978' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 11:55:28 compute-0 systemd[1]: Started libpod-conmon-362a5d0d487087d529c86b6ca08ef49c0d4fcedc9b2e451d872f56c82af111aa.scope.
Nov 26 11:55:28 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29963a89ba530154913cc489435b76b74cf66d391a7218950ddc236469074827/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29963a89ba530154913cc489435b76b74cf66d391a7218950ddc236469074827/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29963a89ba530154913cc489435b76b74cf66d391a7218950ddc236469074827/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29963a89ba530154913cc489435b76b74cf66d391a7218950ddc236469074827/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:55:28 compute-0 podman[251479]: 2025-11-26 11:55:28.740846142 +0000 UTC m=+0.079385456 container init 362a5d0d487087d529c86b6ca08ef49c0d4fcedc9b2e451d872f56c82af111aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:55:28 compute-0 podman[251479]: 2025-11-26 11:55:28.74600672 +0000 UTC m=+0.084546023 container start 362a5d0d487087d529c86b6ca08ef49c0d4fcedc9b2e451d872f56c82af111aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_easley, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 11:55:28 compute-0 podman[251479]: 2025-11-26 11:55:28.747247499 +0000 UTC m=+0.085786803 container attach 362a5d0d487087d529c86b6ca08ef49c0d4fcedc9b2e451d872f56c82af111aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_easley, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 11:55:28 compute-0 podman[251479]: 2025-11-26 11:55:28.677581505 +0000 UTC m=+0.016120828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:55:29 compute-0 flamboyant_easley[251492]: {
Nov 26 11:55:29 compute-0 flamboyant_easley[251492]:     "2627095b-eef8-4027-bfef-68bf7cb6801f": {
Nov 26 11:55:29 compute-0 flamboyant_easley[251492]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:55:29 compute-0 flamboyant_easley[251492]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 11:55:29 compute-0 flamboyant_easley[251492]:         "osd_id": 1,
Nov 26 11:55:29 compute-0 flamboyant_easley[251492]:         "osd_uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:55:29 compute-0 flamboyant_easley[251492]:         "type": "bluestore"
Nov 26 11:55:29 compute-0 flamboyant_easley[251492]:     },
Nov 26 11:55:29 compute-0 flamboyant_easley[251492]:     "a9ad59a0-aa2e-4d92-b571-519d2d145b6a": {
Nov 26 11:55:29 compute-0 flamboyant_easley[251492]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:55:29 compute-0 flamboyant_easley[251492]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 11:55:29 compute-0 flamboyant_easley[251492]:         "osd_id": 0,
Nov 26 11:55:29 compute-0 flamboyant_easley[251492]:         "osd_uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:55:29 compute-0 flamboyant_easley[251492]:         "type": "bluestore"
Nov 26 11:55:29 compute-0 flamboyant_easley[251492]:     },
Nov 26 11:55:29 compute-0 flamboyant_easley[251492]:     "d56156fb-7361-4bef-b06b-1320109b4323": {
Nov 26 11:55:29 compute-0 flamboyant_easley[251492]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:55:29 compute-0 flamboyant_easley[251492]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 11:55:29 compute-0 flamboyant_easley[251492]:         "osd_id": 2,
Nov 26 11:55:29 compute-0 flamboyant_easley[251492]:         "osd_uuid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:55:29 compute-0 flamboyant_easley[251492]:         "type": "bluestore"
Nov 26 11:55:29 compute-0 flamboyant_easley[251492]:     }
Nov 26 11:55:29 compute-0 flamboyant_easley[251492]: }
Nov 26 11:55:29 compute-0 systemd[1]: libpod-362a5d0d487087d529c86b6ca08ef49c0d4fcedc9b2e451d872f56c82af111aa.scope: Deactivated successfully.
Nov 26 11:55:29 compute-0 conmon[251492]: conmon 362a5d0d487087d529c8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-362a5d0d487087d529c86b6ca08ef49c0d4fcedc9b2e451d872f56c82af111aa.scope/container/memory.events
Nov 26 11:55:29 compute-0 podman[251525]: 2025-11-26 11:55:29.521368415 +0000 UTC m=+0.016603078 container died 362a5d0d487087d529c86b6ca08ef49c0d4fcedc9b2e451d872f56c82af111aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 11:55:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-29963a89ba530154913cc489435b76b74cf66d391a7218950ddc236469074827-merged.mount: Deactivated successfully.
Nov 26 11:55:29 compute-0 podman[251525]: 2025-11-26 11:55:29.549389095 +0000 UTC m=+0.044623758 container remove 362a5d0d487087d529c86b6ca08ef49c0d4fcedc9b2e451d872f56c82af111aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_easley, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 11:55:29 compute-0 systemd[1]: libpod-conmon-362a5d0d487087d529c86b6ca08ef49c0d4fcedc9b2e451d872f56c82af111aa.scope: Deactivated successfully.
Nov 26 11:55:29 compute-0 sudo[251389]: pam_unix(sudo:session): session closed for user root
Nov 26 11:55:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:55:29 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:55:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:55:29 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:55:29 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev b8a9ffd7-0e4f-4c21-a734-15aab716f75c does not exist
Nov 26 11:55:29 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 19ae436b-f81e-4f22-8495-6505a3bb7592 does not exist
Nov 26 11:55:29 compute-0 sudo[251537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:55:29 compute-0 sudo[251537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:55:29 compute-0 sudo[251537]: pam_unix(sudo:session): session closed for user root
Nov 26 11:55:29 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:29 compute-0 sudo[251562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:55:29 compute-0 sudo[251562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:55:29 compute-0 sudo[251562]: pam_unix(sudo:session): session closed for user root
Nov 26 11:55:30 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:55:30 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:55:30 compute-0 ceph-mon[74928]: pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:55:31 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:32 compute-0 ceph-mon[74928]: pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:33 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:34 compute-0 ceph-mon[74928]: pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:35 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:55:36 compute-0 ceph-mon[74928]: pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:37 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:38 compute-0 ceph-mon[74928]: pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:39 compute-0 podman[251588]: 2025-11-26 11:55:39.613885005 +0000 UTC m=+0.038410397 container health_status b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 26 11:55:39 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:40 compute-0 ceph-mon[74928]: pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Optimize plan auto_2025-11-26_11:55:41
Nov 26 11:55:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 11:55:41 compute-0 ceph-mgr[75197]: [balancer INFO root] do_upmap
Nov 26 11:55:41 compute-0 ceph-mgr[75197]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'images', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'backups']
Nov 26 11:55:41 compute-0 ceph-mgr[75197]: [balancer INFO root] prepared 0/10 changes
Nov 26 11:55:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:55:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:55:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:55:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:55:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:55:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:55:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 11:55:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:55:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 11:55:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:55:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:55:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:55:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:55:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:55:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:55:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:55:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:55:41 compute-0 podman[251605]: 2025-11-26 11:55:41.611137487 +0000 UTC m=+0.035912819 container health_status 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 26 11:55:41 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:42 compute-0 ceph-mon[74928]: pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:43 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:44 compute-0 ceph-mon[74928]: pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:45 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:55:46 compute-0 ceph-mon[74928]: pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:47 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:48 compute-0 ceph-mon[74928]: pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:49 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 11:55:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:55:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 11:55:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:55:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:55:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:55:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:55:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:55:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:55:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:55:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:55:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:55:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 11:55:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:55:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:55:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:55:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 11:55:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:55:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 11:55:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:55:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:55:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:55:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 11:55:50 compute-0 ceph-mon[74928]: pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:55:51.611961) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764158151612010, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1742, "num_deletes": 250, "total_data_size": 2890186, "memory_usage": 2938760, "flush_reason": "Manual Compaction"}
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764158151617151, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1641549, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11724, "largest_seqno": 13465, "table_properties": {"data_size": 1635785, "index_size": 2839, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14624, "raw_average_key_size": 20, "raw_value_size": 1623043, "raw_average_value_size": 2244, "num_data_blocks": 131, "num_entries": 723, "num_filter_entries": 723, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764157957, "oldest_key_time": 1764157957, "file_creation_time": 1764158151, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "363c2a1d-8d28-40b7-a8ff-7233f1c9b7d5", "db_session_id": "CJT49RLFB1C6KNYXG0ER", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 5225 microseconds, and 3207 cpu microseconds.
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:55:51.617192) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1641549 bytes OK
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:55:51.617207) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:55:51.617596) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:55:51.617606) EVENT_LOG_v1 {"time_micros": 1764158151617603, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:55:51.617620) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2882734, prev total WAL file size 2882734, number of live WAL files 2.
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:55:51.618392) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1603KB)], [29(7843KB)]
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764158151618421, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9673790, "oldest_snapshot_seqno": -1}
Nov 26 11:55:51 compute-0 podman[251621]: 2025-11-26 11:55:51.632201531 +0000 UTC m=+0.057493739 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 3995 keys, 7569157 bytes, temperature: kUnknown
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764158151634242, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7569157, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7540624, "index_size": 17415, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 95163, "raw_average_key_size": 23, "raw_value_size": 7466834, "raw_average_value_size": 1869, "num_data_blocks": 760, "num_entries": 3995, "num_filter_entries": 3995, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764157079, "oldest_key_time": 0, "file_creation_time": 1764158151, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "363c2a1d-8d28-40b7-a8ff-7233f1c9b7d5", "db_session_id": "CJT49RLFB1C6KNYXG0ER", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:55:51.634359) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7569157 bytes
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:55:51.634740) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 610.4 rd, 477.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.7 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(10.5) write-amplify(4.6) OK, records in: 4416, records dropped: 421 output_compression: NoCompression
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:55:51.634752) EVENT_LOG_v1 {"time_micros": 1764158151634747, "job": 12, "event": "compaction_finished", "compaction_time_micros": 15848, "compaction_time_cpu_micros": 12825, "output_level": 6, "num_output_files": 1, "total_output_size": 7569157, "num_input_records": 4416, "num_output_records": 3995, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764158151634983, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764158151635807, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:55:51.618340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:55:51.636572) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:55:51.636576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:55:51.636577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:55:51.636578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:55:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:55:51.636579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:55:51 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:52 compute-0 ceph-mon[74928]: pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:53 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:54 compute-0 ceph-mon[74928]: pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:55 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:55:56 compute-0 ceph-mon[74928]: pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:57 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:58 compute-0 ceph-mon[74928]: pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:55:59 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:00 compute-0 ceph-mon[74928]: pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:56:01 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:02 compute-0 ceph-mon[74928]: pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:56:02.988 159928 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:56:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:56:02.988 159928 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:56:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:56:02.988 159928 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:56:03 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:04 compute-0 ceph-mon[74928]: pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:05 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:56:06 compute-0 ceph-mon[74928]: pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:07 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:08 compute-0 ceph-mon[74928]: pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:09 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:10 compute-0 podman[251646]: 2025-11-26 11:56:10.618165504 +0000 UTC m=+0.039340180 container health_status b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 26 11:56:10 compute-0 ceph-mon[74928]: pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:56:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:56:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:56:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:56:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:56:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:56:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:56:11 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:12 compute-0 podman[251663]: 2025-11-26 11:56:12.605212107 +0000 UTC m=+0.031346341 container health_status 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 11:56:12 compute-0 ceph-mon[74928]: pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:13 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:14 compute-0 ceph-mon[74928]: pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:15 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:56:16 compute-0 nova_compute[248203]: 2025-11-26 11:56:16.625 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:56:16 compute-0 nova_compute[248203]: 2025-11-26 11:56:16.626 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:56:16 compute-0 nova_compute[248203]: 2025-11-26 11:56:16.626 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 11:56:16 compute-0 ceph-mon[74928]: pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:17 compute-0 nova_compute[248203]: 2025-11-26 11:56:17.622 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:56:17 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:18 compute-0 nova_compute[248203]: 2025-11-26 11:56:18.625 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:56:18 compute-0 nova_compute[248203]: 2025-11-26 11:56:18.625 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:56:18 compute-0 nova_compute[248203]: 2025-11-26 11:56:18.625 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 11:56:18 compute-0 nova_compute[248203]: 2025-11-26 11:56:18.625 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 11:56:18 compute-0 nova_compute[248203]: 2025-11-26 11:56:18.646 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 11:56:18 compute-0 nova_compute[248203]: 2025-11-26 11:56:18.646 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:56:18 compute-0 ceph-mon[74928]: pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:19 compute-0 nova_compute[248203]: 2025-11-26 11:56:19.625 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:56:19 compute-0 nova_compute[248203]: 2025-11-26 11:56:19.625 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:56:19 compute-0 nova_compute[248203]: 2025-11-26 11:56:19.625 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:56:19 compute-0 nova_compute[248203]: 2025-11-26 11:56:19.647 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:56:19 compute-0 nova_compute[248203]: 2025-11-26 11:56:19.647 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:56:19 compute-0 nova_compute[248203]: 2025-11-26 11:56:19.648 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:56:19 compute-0 nova_compute[248203]: 2025-11-26 11:56:19.648 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 11:56:19 compute-0 nova_compute[248203]: 2025-11-26 11:56:19.648 248207 DEBUG oslo_concurrency.processutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 11:56:19 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 11:56:19 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3088679667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:56:19 compute-0 nova_compute[248203]: 2025-11-26 11:56:19.963 248207 DEBUG oslo_concurrency.processutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 11:56:20 compute-0 nova_compute[248203]: 2025-11-26 11:56:20.144 248207 WARNING nova.virt.libvirt.driver [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 11:56:20 compute-0 nova_compute[248203]: 2025-11-26 11:56:20.145 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5197MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 11:56:20 compute-0 nova_compute[248203]: 2025-11-26 11:56:20.146 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:56:20 compute-0 nova_compute[248203]: 2025-11-26 11:56:20.146 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:56:20 compute-0 nova_compute[248203]: 2025-11-26 11:56:20.193 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 11:56:20 compute-0 nova_compute[248203]: 2025-11-26 11:56:20.194 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 11:56:20 compute-0 nova_compute[248203]: 2025-11-26 11:56:20.206 248207 DEBUG oslo_concurrency.processutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 11:56:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 11:56:20 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3080514687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:56:20 compute-0 nova_compute[248203]: 2025-11-26 11:56:20.522 248207 DEBUG oslo_concurrency.processutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 11:56:20 compute-0 nova_compute[248203]: 2025-11-26 11:56:20.525 248207 DEBUG nova.compute.provider_tree [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Inventory has not changed in ProviderTree for provider: ffdf5b8d-24ca-43b0-a64a-b7345874e7b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 11:56:20 compute-0 nova_compute[248203]: 2025-11-26 11:56:20.539 248207 DEBUG nova.scheduler.client.report [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Inventory has not changed for provider ffdf5b8d-24ca-43b0-a64a-b7345874e7b4 based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 11:56:20 compute-0 nova_compute[248203]: 2025-11-26 11:56:20.540 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 11:56:20 compute-0 nova_compute[248203]: 2025-11-26 11:56:20.540 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:56:20 compute-0 ceph-mon[74928]: pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:20 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3088679667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:56:20 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3080514687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:56:21 compute-0 nova_compute[248203]: 2025-11-26 11:56:21.541 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:56:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:56:21 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:22 compute-0 podman[251723]: 2025-11-26 11:56:22.63213637 +0000 UTC m=+0.054625803 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 11:56:22 compute-0 ceph-mon[74928]: pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:23 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:24 compute-0 ceph-mon[74928]: pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:25 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:56:26 compute-0 ceph-mon[74928]: pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:27 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 11:56:28 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1398528340' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 11:56:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 11:56:28 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1398528340' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 11:56:28 compute-0 ceph-mon[74928]: pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:28 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/1398528340' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 11:56:28 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/1398528340' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 11:56:29 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:29 compute-0 sudo[251746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:56:29 compute-0 sudo[251746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:56:29 compute-0 sudo[251746]: pam_unix(sudo:session): session closed for user root
Nov 26 11:56:29 compute-0 sudo[251771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:56:29 compute-0 sudo[251771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:56:29 compute-0 sudo[251771]: pam_unix(sudo:session): session closed for user root
Nov 26 11:56:29 compute-0 sudo[251796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:56:29 compute-0 sudo[251796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:56:29 compute-0 sudo[251796]: pam_unix(sudo:session): session closed for user root
Nov 26 11:56:29 compute-0 sudo[251821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 11:56:29 compute-0 sudo[251821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:56:30 compute-0 sudo[251821]: pam_unix(sudo:session): session closed for user root
Nov 26 11:56:30 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:56:30 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:56:30 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:56:30 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:56:30 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:56:30 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:56:30 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 4f8c56eb-1239-4906-a7d1-f3dab3654a2e does not exist
Nov 26 11:56:30 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 1385e5b8-15e8-4a23-9fa2-a31b39142813 does not exist
Nov 26 11:56:30 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 11ec023d-218d-41f2-a465-2aae805447da does not exist
Nov 26 11:56:30 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 11:56:30 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:56:30 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 11:56:30 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:56:30 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:56:30 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:56:30 compute-0 sudo[251874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:56:30 compute-0 sudo[251874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:56:30 compute-0 sudo[251874]: pam_unix(sudo:session): session closed for user root
Nov 26 11:56:30 compute-0 sudo[251899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:56:30 compute-0 sudo[251899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:56:30 compute-0 sudo[251899]: pam_unix(sudo:session): session closed for user root
Nov 26 11:56:30 compute-0 sudo[251924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:56:30 compute-0 sudo[251924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:56:30 compute-0 sudo[251924]: pam_unix(sudo:session): session closed for user root
Nov 26 11:56:30 compute-0 sudo[251949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 11:56:30 compute-0 sudo[251949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:56:30 compute-0 podman[252005]: 2025-11-26 11:56:30.547079763 +0000 UTC m=+0.025962683 container create d536e0de84f1047ae94e6988f65f064cc4438d6ff77e2a522d62690e75268293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:56:30 compute-0 systemd[1]: Started libpod-conmon-d536e0de84f1047ae94e6988f65f064cc4438d6ff77e2a522d62690e75268293.scope.
Nov 26 11:56:30 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:56:30 compute-0 podman[252005]: 2025-11-26 11:56:30.598449417 +0000 UTC m=+0.077332337 container init d536e0de84f1047ae94e6988f65f064cc4438d6ff77e2a522d62690e75268293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:56:30 compute-0 podman[252005]: 2025-11-26 11:56:30.604186853 +0000 UTC m=+0.083069772 container start d536e0de84f1047ae94e6988f65f064cc4438d6ff77e2a522d62690e75268293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ganguly, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 11:56:30 compute-0 podman[252005]: 2025-11-26 11:56:30.605226542 +0000 UTC m=+0.084109462 container attach d536e0de84f1047ae94e6988f65f064cc4438d6ff77e2a522d62690e75268293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ganguly, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:56:30 compute-0 peaceful_ganguly[252018]: 167 167
Nov 26 11:56:30 compute-0 systemd[1]: libpod-d536e0de84f1047ae94e6988f65f064cc4438d6ff77e2a522d62690e75268293.scope: Deactivated successfully.
Nov 26 11:56:30 compute-0 podman[252005]: 2025-11-26 11:56:30.608948549 +0000 UTC m=+0.087831468 container died d536e0de84f1047ae94e6988f65f064cc4438d6ff77e2a522d62690e75268293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:56:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-06f6dbb547ade3e10020616ae00672f2b896874190ded82eb9ad2ea8bb7f4926-merged.mount: Deactivated successfully.
Nov 26 11:56:30 compute-0 podman[252005]: 2025-11-26 11:56:30.626249472 +0000 UTC m=+0.105132391 container remove d536e0de84f1047ae94e6988f65f064cc4438d6ff77e2a522d62690e75268293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 11:56:30 compute-0 podman[252005]: 2025-11-26 11:56:30.536530067 +0000 UTC m=+0.015412996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:56:30 compute-0 systemd[1]: libpod-conmon-d536e0de84f1047ae94e6988f65f064cc4438d6ff77e2a522d62690e75268293.scope: Deactivated successfully.
Nov 26 11:56:30 compute-0 podman[252040]: 2025-11-26 11:56:30.742761704 +0000 UTC m=+0.027219240 container create 27b0ddc338006f7187ca02d5022be38d625cf494ff0d383513a2913246138110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:56:30 compute-0 ceph-mon[74928]: pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:30 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:56:30 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:56:30 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:56:30 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:56:30 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:56:30 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:56:30 compute-0 systemd[1]: Started libpod-conmon-27b0ddc338006f7187ca02d5022be38d625cf494ff0d383513a2913246138110.scope.
Nov 26 11:56:30 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b3fd49910020d77b7dc85bca8dd0f0f54c1414cf3dcdfdfcd9d1e3eacf60859/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b3fd49910020d77b7dc85bca8dd0f0f54c1414cf3dcdfdfcd9d1e3eacf60859/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b3fd49910020d77b7dc85bca8dd0f0f54c1414cf3dcdfdfcd9d1e3eacf60859/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b3fd49910020d77b7dc85bca8dd0f0f54c1414cf3dcdfdfcd9d1e3eacf60859/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b3fd49910020d77b7dc85bca8dd0f0f54c1414cf3dcdfdfcd9d1e3eacf60859/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:56:30 compute-0 podman[252040]: 2025-11-26 11:56:30.814463334 +0000 UTC m=+0.098920882 container init 27b0ddc338006f7187ca02d5022be38d625cf494ff0d383513a2913246138110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lumiere, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 11:56:30 compute-0 podman[252040]: 2025-11-26 11:56:30.820130086 +0000 UTC m=+0.104587624 container start 27b0ddc338006f7187ca02d5022be38d625cf494ff0d383513a2913246138110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:56:30 compute-0 podman[252040]: 2025-11-26 11:56:30.826019699 +0000 UTC m=+0.110477246 container attach 27b0ddc338006f7187ca02d5022be38d625cf494ff0d383513a2913246138110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lumiere, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 11:56:30 compute-0 podman[252040]: 2025-11-26 11:56:30.730965537 +0000 UTC m=+0.015423095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:56:31 compute-0 awesome_lumiere[252053]: --> passed data devices: 0 physical, 3 LVM
Nov 26 11:56:31 compute-0 awesome_lumiere[252053]: --> relative data size: 1.0
Nov 26 11:56:31 compute-0 awesome_lumiere[252053]: --> All data devices are unavailable
Nov 26 11:56:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:56:31 compute-0 systemd[1]: libpod-27b0ddc338006f7187ca02d5022be38d625cf494ff0d383513a2913246138110.scope: Deactivated successfully.
Nov 26 11:56:31 compute-0 conmon[252053]: conmon 27b0ddc338006f7187ca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-27b0ddc338006f7187ca02d5022be38d625cf494ff0d383513a2913246138110.scope/container/memory.events
Nov 26 11:56:31 compute-0 podman[252040]: 2025-11-26 11:56:31.621584488 +0000 UTC m=+0.906042035 container died 27b0ddc338006f7187ca02d5022be38d625cf494ff0d383513a2913246138110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 11:56:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b3fd49910020d77b7dc85bca8dd0f0f54c1414cf3dcdfdfcd9d1e3eacf60859-merged.mount: Deactivated successfully.
Nov 26 11:56:31 compute-0 podman[252040]: 2025-11-26 11:56:31.650594554 +0000 UTC m=+0.935052101 container remove 27b0ddc338006f7187ca02d5022be38d625cf494ff0d383513a2913246138110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 11:56:31 compute-0 systemd[1]: libpod-conmon-27b0ddc338006f7187ca02d5022be38d625cf494ff0d383513a2913246138110.scope: Deactivated successfully.
Nov 26 11:56:31 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:31 compute-0 sudo[251949]: pam_unix(sudo:session): session closed for user root
Nov 26 11:56:31 compute-0 sudo[252091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:56:31 compute-0 sudo[252091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:56:31 compute-0 sudo[252091]: pam_unix(sudo:session): session closed for user root
Nov 26 11:56:31 compute-0 sudo[252116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:56:31 compute-0 sudo[252116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:56:31 compute-0 sudo[252116]: pam_unix(sudo:session): session closed for user root
Nov 26 11:56:31 compute-0 sudo[252141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:56:31 compute-0 sudo[252141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:56:31 compute-0 sudo[252141]: pam_unix(sudo:session): session closed for user root
Nov 26 11:56:31 compute-0 sudo[252166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- lvm list --format json
Nov 26 11:56:31 compute-0 sudo[252166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:56:32 compute-0 podman[252222]: 2025-11-26 11:56:32.055205766 +0000 UTC m=+0.024862057 container create c772b38515c475e32c903b10d89470fb694f9b8ef5ade6bb79968429e7abe557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:56:32 compute-0 systemd[1]: Started libpod-conmon-c772b38515c475e32c903b10d89470fb694f9b8ef5ade6bb79968429e7abe557.scope.
Nov 26 11:56:32 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:56:32 compute-0 podman[252222]: 2025-11-26 11:56:32.1041451 +0000 UTC m=+0.073801411 container init c772b38515c475e32c903b10d89470fb694f9b8ef5ade6bb79968429e7abe557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cerf, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 11:56:32 compute-0 podman[252222]: 2025-11-26 11:56:32.108310712 +0000 UTC m=+0.077967003 container start c772b38515c475e32c903b10d89470fb694f9b8ef5ade6bb79968429e7abe557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cerf, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:56:32 compute-0 podman[252222]: 2025-11-26 11:56:32.109344791 +0000 UTC m=+0.079001083 container attach c772b38515c475e32c903b10d89470fb694f9b8ef5ade6bb79968429e7abe557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 11:56:32 compute-0 pedantic_cerf[252236]: 167 167
Nov 26 11:56:32 compute-0 systemd[1]: libpod-c772b38515c475e32c903b10d89470fb694f9b8ef5ade6bb79968429e7abe557.scope: Deactivated successfully.
Nov 26 11:56:32 compute-0 podman[252222]: 2025-11-26 11:56:32.112549412 +0000 UTC m=+0.082205703 container died c772b38515c475e32c903b10d89470fb694f9b8ef5ade6bb79968429e7abe557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cerf, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 11:56:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca5a8e2de6eecfa90ee9b860f0146d01ef0945e830012640eb9b5d2fbe801621-merged.mount: Deactivated successfully.
Nov 26 11:56:32 compute-0 podman[252222]: 2025-11-26 11:56:32.129255804 +0000 UTC m=+0.098912095 container remove c772b38515c475e32c903b10d89470fb694f9b8ef5ade6bb79968429e7abe557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:56:32 compute-0 podman[252222]: 2025-11-26 11:56:32.044907774 +0000 UTC m=+0.014564085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:56:32 compute-0 systemd[1]: libpod-conmon-c772b38515c475e32c903b10d89470fb694f9b8ef5ade6bb79968429e7abe557.scope: Deactivated successfully.
Nov 26 11:56:32 compute-0 podman[252259]: 2025-11-26 11:56:32.244016433 +0000 UTC m=+0.025813981 container create 5ec7ddbf5bcbb653d638e7933ad8de586885d03828c0424f1692b7811a81f58d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:56:32 compute-0 systemd[1]: Started libpod-conmon-5ec7ddbf5bcbb653d638e7933ad8de586885d03828c0424f1692b7811a81f58d.scope.
Nov 26 11:56:32 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc70b6ae796f511c240df2f51a8d6473707ebd7568931f18a1d1cd4d183d1c6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc70b6ae796f511c240df2f51a8d6473707ebd7568931f18a1d1cd4d183d1c6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc70b6ae796f511c240df2f51a8d6473707ebd7568931f18a1d1cd4d183d1c6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc70b6ae796f511c240df2f51a8d6473707ebd7568931f18a1d1cd4d183d1c6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:56:32 compute-0 podman[252259]: 2025-11-26 11:56:32.302172992 +0000 UTC m=+0.083970530 container init 5ec7ddbf5bcbb653d638e7933ad8de586885d03828c0424f1692b7811a81f58d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 11:56:32 compute-0 podman[252259]: 2025-11-26 11:56:32.307068641 +0000 UTC m=+0.088866179 container start 5ec7ddbf5bcbb653d638e7933ad8de586885d03828c0424f1692b7811a81f58d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 11:56:32 compute-0 podman[252259]: 2025-11-26 11:56:32.308070449 +0000 UTC m=+0.089867987 container attach 5ec7ddbf5bcbb653d638e7933ad8de586885d03828c0424f1692b7811a81f58d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Nov 26 11:56:32 compute-0 podman[252259]: 2025-11-26 11:56:32.233732138 +0000 UTC m=+0.015529706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:56:32 compute-0 ceph-mon[74928]: pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:32 compute-0 sweet_shannon[252272]: {
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:     "0": [
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:         {
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "devices": [
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "/dev/loop3"
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             ],
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "lv_name": "ceph_lv0",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "lv_size": "21470642176",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a9ad59a0-aa2e-4d92-b571-519d2d145b6a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "lv_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "name": "ceph_lv0",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "tags": {
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.block_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.cluster_name": "ceph",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.crush_device_class": "",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.encrypted": "0",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.osd_fsid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.osd_id": "0",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.type": "block",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.vdo": "0"
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             },
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "type": "block",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "vg_name": "ceph_vg0"
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:         }
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:     ],
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:     "1": [
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:         {
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "devices": [
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "/dev/loop4"
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             ],
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "lv_name": "ceph_lv1",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "lv_size": "21470642176",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2627095b-eef8-4027-bfef-68bf7cb6801f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "lv_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "name": "ceph_lv1",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "tags": {
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.block_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.cluster_name": "ceph",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.crush_device_class": "",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.encrypted": "0",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.osd_fsid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.osd_id": "1",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.type": "block",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.vdo": "0"
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             },
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "type": "block",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "vg_name": "ceph_vg1"
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:         }
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:     ],
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:     "2": [
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:         {
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "devices": [
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "/dev/loop5"
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             ],
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "lv_name": "ceph_lv2",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "lv_size": "21470642176",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d56156fb-7361-4bef-b06b-1320109b4323,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "lv_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "name": "ceph_lv2",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "tags": {
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.block_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.cluster_name": "ceph",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.crush_device_class": "",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.encrypted": "0",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.osd_fsid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.osd_id": "2",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.type": "block",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:                 "ceph.vdo": "0"
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             },
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "type": "block",
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:             "vg_name": "ceph_vg2"
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:         }
Nov 26 11:56:32 compute-0 sweet_shannon[252272]:     ]
Nov 26 11:56:32 compute-0 sweet_shannon[252272]: }
Nov 26 11:56:32 compute-0 systemd[1]: libpod-5ec7ddbf5bcbb653d638e7933ad8de586885d03828c0424f1692b7811a81f58d.scope: Deactivated successfully.
Nov 26 11:56:32 compute-0 podman[252281]: 2025-11-26 11:56:32.979609263 +0000 UTC m=+0.015129271 container died 5ec7ddbf5bcbb653d638e7933ad8de586885d03828c0424f1692b7811a81f58d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:56:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc70b6ae796f511c240df2f51a8d6473707ebd7568931f18a1d1cd4d183d1c6f-merged.mount: Deactivated successfully.
Nov 26 11:56:33 compute-0 podman[252281]: 2025-11-26 11:56:33.009520498 +0000 UTC m=+0.045040484 container remove 5ec7ddbf5bcbb653d638e7933ad8de586885d03828c0424f1692b7811a81f58d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 11:56:33 compute-0 systemd[1]: libpod-conmon-5ec7ddbf5bcbb653d638e7933ad8de586885d03828c0424f1692b7811a81f58d.scope: Deactivated successfully.
Nov 26 11:56:33 compute-0 sudo[252166]: pam_unix(sudo:session): session closed for user root
Nov 26 11:56:33 compute-0 sudo[252293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:56:33 compute-0 sudo[252293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:56:33 compute-0 sudo[252293]: pam_unix(sudo:session): session closed for user root
Nov 26 11:56:33 compute-0 sudo[252318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:56:33 compute-0 sudo[252318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:56:33 compute-0 sudo[252318]: pam_unix(sudo:session): session closed for user root
Nov 26 11:56:33 compute-0 sudo[252343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:56:33 compute-0 sudo[252343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:56:33 compute-0 sudo[252343]: pam_unix(sudo:session): session closed for user root
Nov 26 11:56:33 compute-0 sudo[252368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- raw list --format json
Nov 26 11:56:33 compute-0 sudo[252368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:56:33 compute-0 podman[252424]: 2025-11-26 11:56:33.411518174 +0000 UTC m=+0.025154259 container create 0f7872b8dab51e039b779cfba47a80f18b1601b365e445aa3015da3a99279c57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chatterjee, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:56:33 compute-0 systemd[1]: Started libpod-conmon-0f7872b8dab51e039b779cfba47a80f18b1601b365e445aa3015da3a99279c57.scope.
Nov 26 11:56:33 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:56:33 compute-0 podman[252424]: 2025-11-26 11:56:33.457604038 +0000 UTC m=+0.071240133 container init 0f7872b8dab51e039b779cfba47a80f18b1601b365e445aa3015da3a99279c57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chatterjee, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:56:33 compute-0 podman[252424]: 2025-11-26 11:56:33.461347764 +0000 UTC m=+0.074983839 container start 0f7872b8dab51e039b779cfba47a80f18b1601b365e445aa3015da3a99279c57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:56:33 compute-0 podman[252424]: 2025-11-26 11:56:33.46242204 +0000 UTC m=+0.076058104 container attach 0f7872b8dab51e039b779cfba47a80f18b1601b365e445aa3015da3a99279c57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chatterjee, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:56:33 compute-0 gracious_chatterjee[252437]: 167 167
Nov 26 11:56:33 compute-0 systemd[1]: libpod-0f7872b8dab51e039b779cfba47a80f18b1601b365e445aa3015da3a99279c57.scope: Deactivated successfully.
Nov 26 11:56:33 compute-0 conmon[252437]: conmon 0f7872b8dab51e039b77 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0f7872b8dab51e039b779cfba47a80f18b1601b365e445aa3015da3a99279c57.scope/container/memory.events
Nov 26 11:56:33 compute-0 podman[252424]: 2025-11-26 11:56:33.464953711 +0000 UTC m=+0.078589787 container died 0f7872b8dab51e039b779cfba47a80f18b1601b365e445aa3015da3a99279c57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 11:56:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-05ef66877aa45159f4c3ac0d62ac3a63f8815cb4ef8af94a7885c1df4427faf3-merged.mount: Deactivated successfully.
Nov 26 11:56:33 compute-0 podman[252424]: 2025-11-26 11:56:33.48234203 +0000 UTC m=+0.095978105 container remove 0f7872b8dab51e039b779cfba47a80f18b1601b365e445aa3015da3a99279c57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 11:56:33 compute-0 podman[252424]: 2025-11-26 11:56:33.401747505 +0000 UTC m=+0.015383600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:56:33 compute-0 systemd[1]: libpod-conmon-0f7872b8dab51e039b779cfba47a80f18b1601b365e445aa3015da3a99279c57.scope: Deactivated successfully.
Nov 26 11:56:33 compute-0 podman[252458]: 2025-11-26 11:56:33.595033489 +0000 UTC m=+0.025716038 container create 27e5d015e9f95ac5674919918460807cf113671a13baa20be1e2e7bebedf5618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_agnesi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 11:56:33 compute-0 systemd[1]: Started libpod-conmon-27e5d015e9f95ac5674919918460807cf113671a13baa20be1e2e7bebedf5618.scope.
Nov 26 11:56:33 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:56:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a3a126a62a90f57c338274ad7e69ac06c5c7d8c963f41f02974de3263d232f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:56:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a3a126a62a90f57c338274ad7e69ac06c5c7d8c963f41f02974de3263d232f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:56:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a3a126a62a90f57c338274ad7e69ac06c5c7d8c963f41f02974de3263d232f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:56:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a3a126a62a90f57c338274ad7e69ac06c5c7d8c963f41f02974de3263d232f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:56:33 compute-0 podman[252458]: 2025-11-26 11:56:33.646592981 +0000 UTC m=+0.077275541 container init 27e5d015e9f95ac5674919918460807cf113671a13baa20be1e2e7bebedf5618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_agnesi, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 11:56:33 compute-0 podman[252458]: 2025-11-26 11:56:33.652738406 +0000 UTC m=+0.083420955 container start 27e5d015e9f95ac5674919918460807cf113671a13baa20be1e2e7bebedf5618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 11:56:33 compute-0 podman[252458]: 2025-11-26 11:56:33.653745494 +0000 UTC m=+0.084428043 container attach 27e5d015e9f95ac5674919918460807cf113671a13baa20be1e2e7bebedf5618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:56:33 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:33 compute-0 podman[252458]: 2025-11-26 11:56:33.585017448 +0000 UTC m=+0.015699997 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:56:34 compute-0 quirky_agnesi[252471]: {
Nov 26 11:56:34 compute-0 quirky_agnesi[252471]:     "2627095b-eef8-4027-bfef-68bf7cb6801f": {
Nov 26 11:56:34 compute-0 quirky_agnesi[252471]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:56:34 compute-0 quirky_agnesi[252471]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 11:56:34 compute-0 quirky_agnesi[252471]:         "osd_id": 1,
Nov 26 11:56:34 compute-0 quirky_agnesi[252471]:         "osd_uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:56:34 compute-0 quirky_agnesi[252471]:         "type": "bluestore"
Nov 26 11:56:34 compute-0 quirky_agnesi[252471]:     },
Nov 26 11:56:34 compute-0 quirky_agnesi[252471]:     "a9ad59a0-aa2e-4d92-b571-519d2d145b6a": {
Nov 26 11:56:34 compute-0 quirky_agnesi[252471]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:56:34 compute-0 quirky_agnesi[252471]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 11:56:34 compute-0 quirky_agnesi[252471]:         "osd_id": 0,
Nov 26 11:56:34 compute-0 quirky_agnesi[252471]:         "osd_uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:56:34 compute-0 quirky_agnesi[252471]:         "type": "bluestore"
Nov 26 11:56:34 compute-0 quirky_agnesi[252471]:     },
Nov 26 11:56:34 compute-0 quirky_agnesi[252471]:     "d56156fb-7361-4bef-b06b-1320109b4323": {
Nov 26 11:56:34 compute-0 quirky_agnesi[252471]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:56:34 compute-0 quirky_agnesi[252471]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 11:56:34 compute-0 quirky_agnesi[252471]:         "osd_id": 2,
Nov 26 11:56:34 compute-0 quirky_agnesi[252471]:         "osd_uuid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:56:34 compute-0 quirky_agnesi[252471]:         "type": "bluestore"
Nov 26 11:56:34 compute-0 quirky_agnesi[252471]:     }
Nov 26 11:56:34 compute-0 quirky_agnesi[252471]: }
Nov 26 11:56:34 compute-0 systemd[1]: libpod-27e5d015e9f95ac5674919918460807cf113671a13baa20be1e2e7bebedf5618.scope: Deactivated successfully.
Nov 26 11:56:34 compute-0 podman[252458]: 2025-11-26 11:56:34.411052536 +0000 UTC m=+0.841735084 container died 27e5d015e9f95ac5674919918460807cf113671a13baa20be1e2e7bebedf5618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_agnesi, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:56:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a3a126a62a90f57c338274ad7e69ac06c5c7d8c963f41f02974de3263d232f7-merged.mount: Deactivated successfully.
Nov 26 11:56:34 compute-0 podman[252458]: 2025-11-26 11:56:34.440460411 +0000 UTC m=+0.871142960 container remove 27e5d015e9f95ac5674919918460807cf113671a13baa20be1e2e7bebedf5618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_agnesi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 11:56:34 compute-0 systemd[1]: libpod-conmon-27e5d015e9f95ac5674919918460807cf113671a13baa20be1e2e7bebedf5618.scope: Deactivated successfully.
Nov 26 11:56:34 compute-0 sudo[252368]: pam_unix(sudo:session): session closed for user root
Nov 26 11:56:34 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:56:34 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:56:34 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:56:34 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:56:34 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 860e4f50-d85a-40d5-9ae9-7645c2c216b1 does not exist
Nov 26 11:56:34 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev a68c1495-8f39-49e3-881a-7b92ab01e5b5 does not exist
Nov 26 11:56:34 compute-0 sudo[252514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:56:34 compute-0 sudo[252514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:56:34 compute-0 sudo[252514]: pam_unix(sudo:session): session closed for user root
Nov 26 11:56:34 compute-0 sudo[252539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:56:34 compute-0 sudo[252539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:56:34 compute-0 sudo[252539]: pam_unix(sudo:session): session closed for user root
Nov 26 11:56:34 compute-0 ceph-mon[74928]: pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:34 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:56:34 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:56:35 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:56:36 compute-0 ceph-mon[74928]: pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:37 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:38 compute-0 ceph-mon[74928]: pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:39 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:40 compute-0 ceph-mon[74928]: pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:40 compute-0 podman[252564]: 2025-11-26 11:56:40.857532163 +0000 UTC m=+0.044654057 container health_status b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 26 11:56:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Optimize plan auto_2025-11-26_11:56:41
Nov 26 11:56:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 11:56:41 compute-0 ceph-mgr[75197]: [balancer INFO root] do_upmap
Nov 26 11:56:41 compute-0 ceph-mgr[75197]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'backups', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'volumes', 'default.rgw.meta']
Nov 26 11:56:41 compute-0 ceph-mgr[75197]: [balancer INFO root] prepared 0/10 changes
Nov 26 11:56:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:56:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:56:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:56:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:56:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:56:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:56:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 11:56:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:56:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 11:56:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:56:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:56:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:56:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:56:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:56:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:56:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:56:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:56:41 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:42 compute-0 ceph-mon[74928]: pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:43 compute-0 podman[252581]: 2025-11-26 11:56:43.611447101 +0000 UTC m=+0.036186755 container health_status 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 26 11:56:43 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:44 compute-0 ceph-mon[74928]: pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:45 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:56:46 compute-0 ceph-mon[74928]: pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:47 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:48 compute-0 ceph-mon[74928]: pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:49 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 11:56:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:56:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 11:56:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:56:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:56:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:56:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:56:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:56:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:56:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:56:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:56:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:56:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 11:56:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:56:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:56:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:56:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 11:56:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:56:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 11:56:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:56:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:56:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:56:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 11:56:50 compute-0 ceph-mon[74928]: pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:56:51 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:52 compute-0 ceph-mon[74928]: pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:53 compute-0 podman[252597]: 2025-11-26 11:56:53.63418848 +0000 UTC m=+0.060177769 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 26 11:56:53 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:54 compute-0 ceph-mon[74928]: pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:55 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:56:56 compute-0 ceph-mon[74928]: pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:57 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:58 compute-0 ceph-mon[74928]: pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:56:59 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:00 compute-0 ceph-mon[74928]: pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:57:01 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:02 compute-0 ceph-mon[74928]: pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:57:02.988 159928 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:57:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:57:02.989 159928 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:57:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:57:02.989 159928 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:57:03 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:04 compute-0 ceph-mon[74928]: pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:05 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:57:06 compute-0 ceph-mon[74928]: pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:07 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:08 compute-0 ceph-mon[74928]: pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:09 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:10 compute-0 ceph-mon[74928]: pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:57:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:57:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:57:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:57:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:57:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:57:11 compute-0 podman[252620]: 2025-11-26 11:57:11.610818838 +0000 UTC m=+0.036612708 container health_status b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:57:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:57:11 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:12 compute-0 ceph-mon[74928]: pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:13 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:14 compute-0 podman[252638]: 2025-11-26 11:57:14.612460472 +0000 UTC m=+0.037526110 container health_status 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 26 11:57:14 compute-0 ceph-mon[74928]: pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:15 compute-0 nova_compute[248203]: 2025-11-26 11:57:15.625 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:57:15 compute-0 nova_compute[248203]: 2025-11-26 11:57:15.626 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 26 11:57:15 compute-0 nova_compute[248203]: 2025-11-26 11:57:15.647 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 26 11:57:15 compute-0 nova_compute[248203]: 2025-11-26 11:57:15.648 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:57:15 compute-0 nova_compute[248203]: 2025-11-26 11:57:15.648 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 26 11:57:15 compute-0 nova_compute[248203]: 2025-11-26 11:57:15.658 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:57:15 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:57:16 compute-0 ceph-mon[74928]: pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:17 compute-0 nova_compute[248203]: 2025-11-26 11:57:17.663 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:57:17 compute-0 nova_compute[248203]: 2025-11-26 11:57:17.664 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:57:17 compute-0 nova_compute[248203]: 2025-11-26 11:57:17.664 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 11:57:17 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:18 compute-0 nova_compute[248203]: 2025-11-26 11:57:18.622 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:57:18 compute-0 ceph-mon[74928]: pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:19 compute-0 nova_compute[248203]: 2025-11-26 11:57:19.625 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:57:19 compute-0 nova_compute[248203]: 2025-11-26 11:57:19.626 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:57:19 compute-0 nova_compute[248203]: 2025-11-26 11:57:19.651 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:57:19 compute-0 nova_compute[248203]: 2025-11-26 11:57:19.651 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:57:19 compute-0 nova_compute[248203]: 2025-11-26 11:57:19.651 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:57:19 compute-0 nova_compute[248203]: 2025-11-26 11:57:19.652 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 11:57:19 compute-0 nova_compute[248203]: 2025-11-26 11:57:19.652 248207 DEBUG oslo_concurrency.processutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 11:57:19 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 11:57:19 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1568447415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:57:19 compute-0 nova_compute[248203]: 2025-11-26 11:57:19.969 248207 DEBUG oslo_concurrency.processutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 11:57:20 compute-0 nova_compute[248203]: 2025-11-26 11:57:20.163 248207 WARNING nova.virt.libvirt.driver [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 11:57:20 compute-0 nova_compute[248203]: 2025-11-26 11:57:20.164 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5210MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 11:57:20 compute-0 nova_compute[248203]: 2025-11-26 11:57:20.164 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:57:20 compute-0 nova_compute[248203]: 2025-11-26 11:57:20.164 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:57:20 compute-0 nova_compute[248203]: 2025-11-26 11:57:20.351 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 11:57:20 compute-0 nova_compute[248203]: 2025-11-26 11:57:20.352 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 11:57:20 compute-0 nova_compute[248203]: 2025-11-26 11:57:20.431 248207 DEBUG nova.scheduler.client.report [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Refreshing inventories for resource provider ffdf5b8d-24ca-43b0-a64a-b7345874e7b4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 26 11:57:20 compute-0 nova_compute[248203]: 2025-11-26 11:57:20.506 248207 DEBUG nova.scheduler.client.report [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Updating ProviderTree inventory for provider ffdf5b8d-24ca-43b0-a64a-b7345874e7b4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 26 11:57:20 compute-0 nova_compute[248203]: 2025-11-26 11:57:20.506 248207 DEBUG nova.compute.provider_tree [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Updating inventory in ProviderTree for provider ffdf5b8d-24ca-43b0-a64a-b7345874e7b4 with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 26 11:57:20 compute-0 nova_compute[248203]: 2025-11-26 11:57:20.521 248207 DEBUG nova.scheduler.client.report [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Refreshing aggregate associations for resource provider ffdf5b8d-24ca-43b0-a64a-b7345874e7b4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 26 11:57:20 compute-0 nova_compute[248203]: 2025-11-26 11:57:20.537 248207 DEBUG nova.scheduler.client.report [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Refreshing trait associations for resource provider ffdf5b8d-24ca-43b0-a64a-b7345874e7b4, traits: COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX512VAES,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SVM,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AESNI,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI2,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_F16C,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,HW_CPU_X86_FMA3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NODE,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX,COMPUTE_ACCELERATORS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 26 11:57:20 compute-0 nova_compute[248203]: 2025-11-26 11:57:20.547 248207 DEBUG oslo_concurrency.processutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 11:57:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 11:57:20 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/452013539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:57:20 compute-0 ceph-mon[74928]: pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:20 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1568447415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:57:20 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/452013539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:57:20 compute-0 nova_compute[248203]: 2025-11-26 11:57:20.866 248207 DEBUG oslo_concurrency.processutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 11:57:20 compute-0 nova_compute[248203]: 2025-11-26 11:57:20.869 248207 DEBUG nova.compute.provider_tree [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Inventory has not changed in ProviderTree for provider: ffdf5b8d-24ca-43b0-a64a-b7345874e7b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 11:57:20 compute-0 nova_compute[248203]: 2025-11-26 11:57:20.884 248207 DEBUG nova.scheduler.client.report [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Inventory has not changed for provider ffdf5b8d-24ca-43b0-a64a-b7345874e7b4 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 11:57:20 compute-0 nova_compute[248203]: 2025-11-26 11:57:20.885 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 11:57:20 compute-0 nova_compute[248203]: 2025-11-26 11:57:20.885 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:57:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:57:21 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:21 compute-0 nova_compute[248203]: 2025-11-26 11:57:21.886 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:57:21 compute-0 nova_compute[248203]: 2025-11-26 11:57:21.886 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 11:57:21 compute-0 nova_compute[248203]: 2025-11-26 11:57:21.886 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 11:57:21 compute-0 nova_compute[248203]: 2025-11-26 11:57:21.896 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 11:57:21 compute-0 nova_compute[248203]: 2025-11-26 11:57:21.897 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:57:21 compute-0 nova_compute[248203]: 2025-11-26 11:57:21.897 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:57:22 compute-0 nova_compute[248203]: 2025-11-26 11:57:22.625 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:57:22 compute-0 ceph-mon[74928]: pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:23 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:24 compute-0 podman[252699]: 2025-11-26 11:57:24.626338305 +0000 UTC m=+0.051798224 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 11:57:24 compute-0 ceph-mon[74928]: pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:25 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:57:26 compute-0 ceph-mon[74928]: pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:27 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 11:57:28 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/75415739' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 11:57:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 11:57:28 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/75415739' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 11:57:28 compute-0 ceph-mon[74928]: pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:28 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/75415739' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 11:57:28 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/75415739' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 11:57:29 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:30 compute-0 ceph-mon[74928]: pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:57:31 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:32 compute-0 ceph-mon[74928]: pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:33 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:34 compute-0 sudo[252722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:57:34 compute-0 sudo[252722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:57:34 compute-0 sudo[252722]: pam_unix(sudo:session): session closed for user root
Nov 26 11:57:34 compute-0 sudo[252747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:57:34 compute-0 sudo[252747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:57:34 compute-0 sudo[252747]: pam_unix(sudo:session): session closed for user root
Nov 26 11:57:34 compute-0 sudo[252772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:57:34 compute-0 sudo[252772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:57:34 compute-0 sudo[252772]: pam_unix(sudo:session): session closed for user root
Nov 26 11:57:34 compute-0 sudo[252797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 11:57:34 compute-0 sudo[252797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:57:34 compute-0 ceph-mon[74928]: pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:35 compute-0 sudo[252797]: pam_unix(sudo:session): session closed for user root
Nov 26 11:57:35 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:57:35 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:57:35 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:57:35 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:57:35 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:57:35 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:57:35 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 4af5a752-3854-47c1-a7ae-c4b5805153bc does not exist
Nov 26 11:57:35 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 3b9c9aca-a3db-4471-aae2-2805a13df754 does not exist
Nov 26 11:57:35 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev df6e88ae-d9b1-44b5-b771-b098a19919c9 does not exist
Nov 26 11:57:35 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 11:57:35 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:57:35 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 11:57:35 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:57:35 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:57:35 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:57:35 compute-0 sudo[252850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:57:35 compute-0 sudo[252850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:57:35 compute-0 sudo[252850]: pam_unix(sudo:session): session closed for user root
Nov 26 11:57:35 compute-0 sudo[252875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:57:35 compute-0 sudo[252875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:57:35 compute-0 sudo[252875]: pam_unix(sudo:session): session closed for user root
Nov 26 11:57:35 compute-0 sudo[252900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:57:35 compute-0 sudo[252900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:57:35 compute-0 sudo[252900]: pam_unix(sudo:session): session closed for user root
Nov 26 11:57:35 compute-0 sudo[252925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 11:57:35 compute-0 sudo[252925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:57:35 compute-0 podman[252980]: 2025-11-26 11:57:35.489424532 +0000 UTC m=+0.026270550 container create 6744292eafc774de7104e0cd1d7fba6aa12c9ebafcdda7ed5b0ea3fbecc1b205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_easley, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 11:57:35 compute-0 systemd[1]: Started libpod-conmon-6744292eafc774de7104e0cd1d7fba6aa12c9ebafcdda7ed5b0ea3fbecc1b205.scope.
Nov 26 11:57:35 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:57:35 compute-0 podman[252980]: 2025-11-26 11:57:35.545336798 +0000 UTC m=+0.082182815 container init 6744292eafc774de7104e0cd1d7fba6aa12c9ebafcdda7ed5b0ea3fbecc1b205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 11:57:35 compute-0 podman[252980]: 2025-11-26 11:57:35.551350054 +0000 UTC m=+0.088196071 container start 6744292eafc774de7104e0cd1d7fba6aa12c9ebafcdda7ed5b0ea3fbecc1b205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_easley, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:57:35 compute-0 podman[252980]: 2025-11-26 11:57:35.552604719 +0000 UTC m=+0.089450737 container attach 6744292eafc774de7104e0cd1d7fba6aa12c9ebafcdda7ed5b0ea3fbecc1b205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 11:57:35 compute-0 angry_easley[252993]: 167 167
Nov 26 11:57:35 compute-0 systemd[1]: libpod-6744292eafc774de7104e0cd1d7fba6aa12c9ebafcdda7ed5b0ea3fbecc1b205.scope: Deactivated successfully.
Nov 26 11:57:35 compute-0 podman[252980]: 2025-11-26 11:57:35.555131733 +0000 UTC m=+0.091977742 container died 6744292eafc774de7104e0cd1d7fba6aa12c9ebafcdda7ed5b0ea3fbecc1b205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_easley, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 11:57:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa63aab42f84dc30165fb2c2be99194687bf9cb6f6d0a287abe418b50ac12b73-merged.mount: Deactivated successfully.
Nov 26 11:57:35 compute-0 podman[252980]: 2025-11-26 11:57:35.575044427 +0000 UTC m=+0.111890445 container remove 6744292eafc774de7104e0cd1d7fba6aa12c9ebafcdda7ed5b0ea3fbecc1b205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:57:35 compute-0 podman[252980]: 2025-11-26 11:57:35.478601245 +0000 UTC m=+0.015447284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:57:35 compute-0 systemd[1]: libpod-conmon-6744292eafc774de7104e0cd1d7fba6aa12c9ebafcdda7ed5b0ea3fbecc1b205.scope: Deactivated successfully.
Nov 26 11:57:35 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:35 compute-0 podman[253015]: 2025-11-26 11:57:35.694160774 +0000 UTC m=+0.028388814 container create f5df2d32b28950b2addf33c74b45ade5ae698695a8459a5468f130afa72a9fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 11:57:35 compute-0 systemd[1]: Started libpod-conmon-f5df2d32b28950b2addf33c74b45ade5ae698695a8459a5468f130afa72a9fff.scope.
Nov 26 11:57:35 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7263c5fba50b396f1cd975e67af6a2d7d896c6632db16ca58b613ae5c2f9692f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7263c5fba50b396f1cd975e67af6a2d7d896c6632db16ca58b613ae5c2f9692f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7263c5fba50b396f1cd975e67af6a2d7d896c6632db16ca58b613ae5c2f9692f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7263c5fba50b396f1cd975e67af6a2d7d896c6632db16ca58b613ae5c2f9692f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7263c5fba50b396f1cd975e67af6a2d7d896c6632db16ca58b613ae5c2f9692f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:57:35 compute-0 podman[253015]: 2025-11-26 11:57:35.757246333 +0000 UTC m=+0.091474373 container init f5df2d32b28950b2addf33c74b45ade5ae698695a8459a5468f130afa72a9fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_euler, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Nov 26 11:57:35 compute-0 podman[253015]: 2025-11-26 11:57:35.761537303 +0000 UTC m=+0.095765334 container start f5df2d32b28950b2addf33c74b45ade5ae698695a8459a5468f130afa72a9fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:57:35 compute-0 podman[253015]: 2025-11-26 11:57:35.76271311 +0000 UTC m=+0.096941140 container attach f5df2d32b28950b2addf33c74b45ade5ae698695a8459a5468f130afa72a9fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_euler, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:57:35 compute-0 podman[253015]: 2025-11-26 11:57:35.682068014 +0000 UTC m=+0.016296065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:57:35 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:57:35 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:57:35 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:57:35 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:57:35 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:57:35 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:57:36 compute-0 funny_euler[253028]: --> passed data devices: 0 physical, 3 LVM
Nov 26 11:57:36 compute-0 funny_euler[253028]: --> relative data size: 1.0
Nov 26 11:57:36 compute-0 funny_euler[253028]: --> All data devices are unavailable
Nov 26 11:57:36 compute-0 systemd[1]: libpod-f5df2d32b28950b2addf33c74b45ade5ae698695a8459a5468f130afa72a9fff.scope: Deactivated successfully.
Nov 26 11:57:36 compute-0 podman[253057]: 2025-11-26 11:57:36.606865073 +0000 UTC m=+0.016965598 container died f5df2d32b28950b2addf33c74b45ade5ae698695a8459a5468f130afa72a9fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_euler, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 11:57:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-7263c5fba50b396f1cd975e67af6a2d7d896c6632db16ca58b613ae5c2f9692f-merged.mount: Deactivated successfully.
Nov 26 11:57:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:57:36 compute-0 podman[253057]: 2025-11-26 11:57:36.636658524 +0000 UTC m=+0.046759029 container remove f5df2d32b28950b2addf33c74b45ade5ae698695a8459a5468f130afa72a9fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 11:57:36 compute-0 systemd[1]: libpod-conmon-f5df2d32b28950b2addf33c74b45ade5ae698695a8459a5468f130afa72a9fff.scope: Deactivated successfully.
Nov 26 11:57:36 compute-0 sudo[252925]: pam_unix(sudo:session): session closed for user root
Nov 26 11:57:36 compute-0 sudo[253069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:57:36 compute-0 sudo[253069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:57:36 compute-0 sudo[253069]: pam_unix(sudo:session): session closed for user root
Nov 26 11:57:36 compute-0 sudo[253094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:57:36 compute-0 sudo[253094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:57:36 compute-0 sudo[253094]: pam_unix(sudo:session): session closed for user root
Nov 26 11:57:36 compute-0 sudo[253119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:57:36 compute-0 sudo[253119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:57:36 compute-0 sudo[253119]: pam_unix(sudo:session): session closed for user root
Nov 26 11:57:36 compute-0 sudo[253144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- lvm list --format json
Nov 26 11:57:36 compute-0 sudo[253144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:57:36 compute-0 ceph-mon[74928]: pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:37 compute-0 podman[253201]: 2025-11-26 11:57:37.042716472 +0000 UTC m=+0.025670840 container create 67ba26052fc0b6a3caa6b7f83193675656f9e424391179d72203bf2715dc0105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hofstadter, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:57:37 compute-0 systemd[1]: Started libpod-conmon-67ba26052fc0b6a3caa6b7f83193675656f9e424391179d72203bf2715dc0105.scope.
Nov 26 11:57:37 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:57:37 compute-0 podman[253201]: 2025-11-26 11:57:37.099671533 +0000 UTC m=+0.082625892 container init 67ba26052fc0b6a3caa6b7f83193675656f9e424391179d72203bf2715dc0105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hofstadter, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 11:57:37 compute-0 podman[253201]: 2025-11-26 11:57:37.103981308 +0000 UTC m=+0.086935667 container start 67ba26052fc0b6a3caa6b7f83193675656f9e424391179d72203bf2715dc0105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 26 11:57:37 compute-0 podman[253201]: 2025-11-26 11:57:37.105045435 +0000 UTC m=+0.087999813 container attach 67ba26052fc0b6a3caa6b7f83193675656f9e424391179d72203bf2715dc0105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hofstadter, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:57:37 compute-0 recursing_hofstadter[253214]: 167 167
Nov 26 11:57:37 compute-0 systemd[1]: libpod-67ba26052fc0b6a3caa6b7f83193675656f9e424391179d72203bf2715dc0105.scope: Deactivated successfully.
Nov 26 11:57:37 compute-0 podman[253201]: 2025-11-26 11:57:37.107369456 +0000 UTC m=+0.090323815 container died 67ba26052fc0b6a3caa6b7f83193675656f9e424391179d72203bf2715dc0105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 11:57:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-215de1ee8316882363c8478b3a646d8c55f7705a1ebe0ae1557d9f0d016d52dd-merged.mount: Deactivated successfully.
Nov 26 11:57:37 compute-0 podman[253201]: 2025-11-26 11:57:37.126281071 +0000 UTC m=+0.109235430 container remove 67ba26052fc0b6a3caa6b7f83193675656f9e424391179d72203bf2715dc0105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 11:57:37 compute-0 podman[253201]: 2025-11-26 11:57:37.032610278 +0000 UTC m=+0.015564656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:57:37 compute-0 systemd[1]: libpod-conmon-67ba26052fc0b6a3caa6b7f83193675656f9e424391179d72203bf2715dc0105.scope: Deactivated successfully.
Nov 26 11:57:37 compute-0 podman[253235]: 2025-11-26 11:57:37.241670734 +0000 UTC m=+0.026348398 container create de0f7de62c462bf0056bf4ad8ae6465c45f56343b35f873774a3c20f0d84a6c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 26 11:57:37 compute-0 systemd[1]: Started libpod-conmon-de0f7de62c462bf0056bf4ad8ae6465c45f56343b35f873774a3c20f0d84a6c2.scope.
Nov 26 11:57:37 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:57:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfa42a62367ab0fe8a85fa5beddc672b85d85d83eefcc47a09c625ad07aec22b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:57:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfa42a62367ab0fe8a85fa5beddc672b85d85d83eefcc47a09c625ad07aec22b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:57:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfa42a62367ab0fe8a85fa5beddc672b85d85d83eefcc47a09c625ad07aec22b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:57:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfa42a62367ab0fe8a85fa5beddc672b85d85d83eefcc47a09c625ad07aec22b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:57:37 compute-0 podman[253235]: 2025-11-26 11:57:37.289753647 +0000 UTC m=+0.074431321 container init de0f7de62c462bf0056bf4ad8ae6465c45f56343b35f873774a3c20f0d84a6c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ellis, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 11:57:37 compute-0 podman[253235]: 2025-11-26 11:57:37.295456509 +0000 UTC m=+0.080134163 container start de0f7de62c462bf0056bf4ad8ae6465c45f56343b35f873774a3c20f0d84a6c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ellis, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 11:57:37 compute-0 podman[253235]: 2025-11-26 11:57:37.296538219 +0000 UTC m=+0.081215873 container attach de0f7de62c462bf0056bf4ad8ae6465c45f56343b35f873774a3c20f0d84a6c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ellis, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 11:57:37 compute-0 podman[253235]: 2025-11-26 11:57:37.231365514 +0000 UTC m=+0.016043188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:57:37 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:37 compute-0 adoring_ellis[253248]: {
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:     "0": [
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:         {
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "devices": [
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "/dev/loop3"
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             ],
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "lv_name": "ceph_lv0",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "lv_size": "21470642176",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a9ad59a0-aa2e-4d92-b571-519d2d145b6a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "lv_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "name": "ceph_lv0",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "tags": {
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.block_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.cluster_name": "ceph",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.crush_device_class": "",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.encrypted": "0",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.osd_fsid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.osd_id": "0",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.type": "block",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.vdo": "0"
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             },
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "type": "block",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "vg_name": "ceph_vg0"
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:         }
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:     ],
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:     "1": [
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:         {
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "devices": [
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "/dev/loop4"
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             ],
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "lv_name": "ceph_lv1",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "lv_size": "21470642176",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2627095b-eef8-4027-bfef-68bf7cb6801f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "lv_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "name": "ceph_lv1",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "tags": {
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.block_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.cluster_name": "ceph",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.crush_device_class": "",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.encrypted": "0",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.osd_fsid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.osd_id": "1",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.type": "block",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.vdo": "0"
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             },
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "type": "block",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "vg_name": "ceph_vg1"
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:         }
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:     ],
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:     "2": [
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:         {
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "devices": [
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "/dev/loop5"
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             ],
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "lv_name": "ceph_lv2",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "lv_size": "21470642176",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d56156fb-7361-4bef-b06b-1320109b4323,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "lv_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "name": "ceph_lv2",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "tags": {
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.block_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.cluster_name": "ceph",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.crush_device_class": "",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.encrypted": "0",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.osd_fsid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.osd_id": "2",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.type": "block",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:                 "ceph.vdo": "0"
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             },
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "type": "block",
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:             "vg_name": "ceph_vg2"
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:         }
Nov 26 11:57:37 compute-0 adoring_ellis[253248]:     ]
Nov 26 11:57:37 compute-0 adoring_ellis[253248]: }
Nov 26 11:57:37 compute-0 systemd[1]: libpod-de0f7de62c462bf0056bf4ad8ae6465c45f56343b35f873774a3c20f0d84a6c2.scope: Deactivated successfully.
Nov 26 11:57:37 compute-0 conmon[253248]: conmon de0f7de62c462bf0056b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-de0f7de62c462bf0056bf4ad8ae6465c45f56343b35f873774a3c20f0d84a6c2.scope/container/memory.events
Nov 26 11:57:37 compute-0 podman[253235]: 2025-11-26 11:57:37.936298298 +0000 UTC m=+0.720975952 container died de0f7de62c462bf0056bf4ad8ae6465c45f56343b35f873774a3c20f0d84a6c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 11:57:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfa42a62367ab0fe8a85fa5beddc672b85d85d83eefcc47a09c625ad07aec22b-merged.mount: Deactivated successfully.
Nov 26 11:57:37 compute-0 podman[253235]: 2025-11-26 11:57:37.969053595 +0000 UTC m=+0.753731249 container remove de0f7de62c462bf0056bf4ad8ae6465c45f56343b35f873774a3c20f0d84a6c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ellis, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:57:37 compute-0 systemd[1]: libpod-conmon-de0f7de62c462bf0056bf4ad8ae6465c45f56343b35f873774a3c20f0d84a6c2.scope: Deactivated successfully.
Nov 26 11:57:37 compute-0 sudo[253144]: pam_unix(sudo:session): session closed for user root
Nov 26 11:57:38 compute-0 sudo[253267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:57:38 compute-0 sudo[253267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:57:38 compute-0 sudo[253267]: pam_unix(sudo:session): session closed for user root
Nov 26 11:57:38 compute-0 sudo[253292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:57:38 compute-0 sudo[253292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:57:38 compute-0 sudo[253292]: pam_unix(sudo:session): session closed for user root
Nov 26 11:57:38 compute-0 sudo[253317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:57:38 compute-0 sudo[253317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:57:38 compute-0 sudo[253317]: pam_unix(sudo:session): session closed for user root
Nov 26 11:57:38 compute-0 sudo[253342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- raw list --format json
Nov 26 11:57:38 compute-0 sudo[253342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:57:38 compute-0 podman[253397]: 2025-11-26 11:57:38.376445456 +0000 UTC m=+0.027819882 container create f922697b3ce2f32ea5d8abcbe9dea2907543ace704db0a16e22fca54d09c9866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gagarin, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:57:38 compute-0 systemd[1]: Started libpod-conmon-f922697b3ce2f32ea5d8abcbe9dea2907543ace704db0a16e22fca54d09c9866.scope.
Nov 26 11:57:38 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:57:38 compute-0 podman[253397]: 2025-11-26 11:57:38.424831973 +0000 UTC m=+0.076206418 container init f922697b3ce2f32ea5d8abcbe9dea2907543ace704db0a16e22fca54d09c9866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:57:38 compute-0 podman[253397]: 2025-11-26 11:57:38.429243179 +0000 UTC m=+0.080617604 container start f922697b3ce2f32ea5d8abcbe9dea2907543ace704db0a16e22fca54d09c9866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:57:38 compute-0 podman[253397]: 2025-11-26 11:57:38.430555303 +0000 UTC m=+0.081929748 container attach f922697b3ce2f32ea5d8abcbe9dea2907543ace704db0a16e22fca54d09c9866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 11:57:38 compute-0 elated_gagarin[253411]: 167 167
Nov 26 11:57:38 compute-0 systemd[1]: libpod-f922697b3ce2f32ea5d8abcbe9dea2907543ace704db0a16e22fca54d09c9866.scope: Deactivated successfully.
Nov 26 11:57:38 compute-0 conmon[253411]: conmon f922697b3ce2f32ea5d8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f922697b3ce2f32ea5d8abcbe9dea2907543ace704db0a16e22fca54d09c9866.scope/container/memory.events
Nov 26 11:57:38 compute-0 podman[253397]: 2025-11-26 11:57:38.365419057 +0000 UTC m=+0.016793483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:57:38 compute-0 podman[253416]: 2025-11-26 11:57:38.462973824 +0000 UTC m=+0.017272366 container died f922697b3ce2f32ea5d8abcbe9dea2907543ace704db0a16e22fca54d09c9866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gagarin, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:57:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-822405ed002c8fef68ed9bb593941bf291b6259c458f0eb2a18d90e1eac260d0-merged.mount: Deactivated successfully.
Nov 26 11:57:38 compute-0 podman[253416]: 2025-11-26 11:57:38.480046162 +0000 UTC m=+0.034344694 container remove f922697b3ce2f32ea5d8abcbe9dea2907543ace704db0a16e22fca54d09c9866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gagarin, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 11:57:38 compute-0 systemd[1]: libpod-conmon-f922697b3ce2f32ea5d8abcbe9dea2907543ace704db0a16e22fca54d09c9866.scope: Deactivated successfully.
Nov 26 11:57:38 compute-0 podman[253434]: 2025-11-26 11:57:38.598604527 +0000 UTC m=+0.026928762 container create afaec639d63895bd2eb8ff10fcababb464bb050234e404d0686be73bc73bc288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 11:57:38 compute-0 systemd[1]: Started libpod-conmon-afaec639d63895bd2eb8ff10fcababb464bb050234e404d0686be73bc73bc288.scope.
Nov 26 11:57:38 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4710b6bf79fb3dfa3cb47e95511cc0ba630cd5b99f7c2debf5dc38347dd71ac2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4710b6bf79fb3dfa3cb47e95511cc0ba630cd5b99f7c2debf5dc38347dd71ac2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4710b6bf79fb3dfa3cb47e95511cc0ba630cd5b99f7c2debf5dc38347dd71ac2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4710b6bf79fb3dfa3cb47e95511cc0ba630cd5b99f7c2debf5dc38347dd71ac2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:57:38 compute-0 podman[253434]: 2025-11-26 11:57:38.659493556 +0000 UTC m=+0.087817779 container init afaec639d63895bd2eb8ff10fcababb464bb050234e404d0686be73bc73bc288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_sutherland, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:57:38 compute-0 podman[253434]: 2025-11-26 11:57:38.664361192 +0000 UTC m=+0.092685417 container start afaec639d63895bd2eb8ff10fcababb464bb050234e404d0686be73bc73bc288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 11:57:38 compute-0 podman[253434]: 2025-11-26 11:57:38.665516271 +0000 UTC m=+0.093840494 container attach afaec639d63895bd2eb8ff10fcababb464bb050234e404d0686be73bc73bc288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:57:38 compute-0 podman[253434]: 2025-11-26 11:57:38.586941258 +0000 UTC m=+0.015265502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:57:38 compute-0 ceph-mon[74928]: pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:39 compute-0 elastic_sutherland[253448]: {
Nov 26 11:57:39 compute-0 elastic_sutherland[253448]:     "2627095b-eef8-4027-bfef-68bf7cb6801f": {
Nov 26 11:57:39 compute-0 elastic_sutherland[253448]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:57:39 compute-0 elastic_sutherland[253448]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 11:57:39 compute-0 elastic_sutherland[253448]:         "osd_id": 1,
Nov 26 11:57:39 compute-0 elastic_sutherland[253448]:         "osd_uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:57:39 compute-0 elastic_sutherland[253448]:         "type": "bluestore"
Nov 26 11:57:39 compute-0 elastic_sutherland[253448]:     },
Nov 26 11:57:39 compute-0 elastic_sutherland[253448]:     "a9ad59a0-aa2e-4d92-b571-519d2d145b6a": {
Nov 26 11:57:39 compute-0 elastic_sutherland[253448]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:57:39 compute-0 elastic_sutherland[253448]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 11:57:39 compute-0 elastic_sutherland[253448]:         "osd_id": 0,
Nov 26 11:57:39 compute-0 elastic_sutherland[253448]:         "osd_uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:57:39 compute-0 elastic_sutherland[253448]:         "type": "bluestore"
Nov 26 11:57:39 compute-0 elastic_sutherland[253448]:     },
Nov 26 11:57:39 compute-0 elastic_sutherland[253448]:     "d56156fb-7361-4bef-b06b-1320109b4323": {
Nov 26 11:57:39 compute-0 elastic_sutherland[253448]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:57:39 compute-0 elastic_sutherland[253448]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 11:57:39 compute-0 elastic_sutherland[253448]:         "osd_id": 2,
Nov 26 11:57:39 compute-0 elastic_sutherland[253448]:         "osd_uuid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:57:39 compute-0 elastic_sutherland[253448]:         "type": "bluestore"
Nov 26 11:57:39 compute-0 elastic_sutherland[253448]:     }
Nov 26 11:57:39 compute-0 elastic_sutherland[253448]: }
Nov 26 11:57:39 compute-0 systemd[1]: libpod-afaec639d63895bd2eb8ff10fcababb464bb050234e404d0686be73bc73bc288.scope: Deactivated successfully.
Nov 26 11:57:39 compute-0 podman[253434]: 2025-11-26 11:57:39.406471169 +0000 UTC m=+0.834795383 container died afaec639d63895bd2eb8ff10fcababb464bb050234e404d0686be73bc73bc288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_sutherland, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:57:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-4710b6bf79fb3dfa3cb47e95511cc0ba630cd5b99f7c2debf5dc38347dd71ac2-merged.mount: Deactivated successfully.
Nov 26 11:57:39 compute-0 podman[253434]: 2025-11-26 11:57:39.43444488 +0000 UTC m=+0.862769104 container remove afaec639d63895bd2eb8ff10fcababb464bb050234e404d0686be73bc73bc288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_sutherland, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:57:39 compute-0 systemd[1]: libpod-conmon-afaec639d63895bd2eb8ff10fcababb464bb050234e404d0686be73bc73bc288.scope: Deactivated successfully.
Nov 26 11:57:39 compute-0 sudo[253342]: pam_unix(sudo:session): session closed for user root
Nov 26 11:57:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:57:39 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:57:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:57:39 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:57:39 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev de02ff20-5370-4811-83f2-f2ea1fbd0336 does not exist
Nov 26 11:57:39 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 6992416c-e496-453f-a093-4d4d240721c3 does not exist
Nov 26 11:57:39 compute-0 sudo[253492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:57:39 compute-0 sudo[253492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:57:39 compute-0 sudo[253492]: pam_unix(sudo:session): session closed for user root
Nov 26 11:57:39 compute-0 sudo[253517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:57:39 compute-0 sudo[253517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:57:39 compute-0 sudo[253517]: pam_unix(sudo:session): session closed for user root
Nov 26 11:57:39 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:40 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:57:40 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:57:40 compute-0 ceph-mon[74928]: pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Optimize plan auto_2025-11-26_11:57:41
Nov 26 11:57:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 11:57:41 compute-0 ceph-mgr[75197]: [balancer INFO root] do_upmap
Nov 26 11:57:41 compute-0 ceph-mgr[75197]: [balancer INFO root] pools ['.mgr', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'backups', 'images', 'volumes', 'default.rgw.meta', 'default.rgw.control']
Nov 26 11:57:41 compute-0 ceph-mgr[75197]: [balancer INFO root] prepared 0/10 changes
Nov 26 11:57:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:57:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:57:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:57:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:57:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:57:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:57:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 11:57:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:57:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 11:57:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:57:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:57:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:57:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:57:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:57:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:57:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:57:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:57:41 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:42 compute-0 podman[253542]: 2025-11-26 11:57:42.617664798 +0000 UTC m=+0.042261219 container health_status b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
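The health_status=healthy events above come from podman running the container's configured healthcheck ('test': '/openstack/healthcheck') on a timer. A minimal sketch of querying the same state on demand, assuming podman is on PATH and parsing the inspect JSON rather than relying on any one Go-template key:

# Hedged sketch: read the container health state that these periodic
# health_status log lines report. Older podman exposes it under
# State.Healthcheck, newer under State.Health, so both keys are tried.
import json
import subprocess

def container_health(name: str) -> str:
    out = subprocess.run(
        ["podman", "inspect", name],
        check=True, capture_output=True, text=True,
    ).stdout
    state = json.loads(out)[0]["State"]
    health = state.get("Health") or state.get("Healthcheck") or {}
    return health.get("Status", "unknown")

if __name__ == "__main__":
    print(container_health("multipathd"))  # expected "healthy", matching the log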
Nov 26 11:57:42 compute-0 ceph-mon[74928]: pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:43 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:44 compute-0 ceph-mon[74928]: pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:45 compute-0 podman[253559]: 2025-11-26 11:57:45.608418625 +0000 UTC m=+0.033388259 container health_status 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:57:45 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:57:46 compute-0 ceph-mon[74928]: pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:47 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:48 compute-0 ceph-mon[74928]: pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:49 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 11:57:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:57:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 11:57:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:57:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:57:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:57:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:57:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:57:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:57:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:57:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:57:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:57:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 11:57:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:57:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:57:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:57:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 11:57:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:57:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 11:57:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:57:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:57:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:57:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
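The pg_autoscaler figures above follow one relation: raw pg target = capacity ratio x bias x an overall PG budget, which is then quantized to a power of two no lower than the pool's floor. A minimal sketch reproducing the printed targets, assuming the budget is 300 (the default mon_target_pg_per_osd of 100 times three OSDs; the log does not state the OSD count):

# Hedged sketch reproducing the arithmetic visible in the pg_autoscaler lines above.
# The factor 300 is an assumption (mon_target_pg_per_osd=100 x 3 OSDs); the log only
# shows the capacity ratio, the bias, and the resulting raw pg target.
TARGET_PGS = 100 * 3  # assumed overall PG budget

def raw_pg_target(capacity_ratio: float, bias: float) -> float:
    return capacity_ratio * bias * TARGET_PGS

# Values copied from the log lines above:
print(raw_pg_target(7.185749983720779e-06, 1.0))   # .mgr               -> ~0.0021557
print(raw_pg_target(5.087256625643029e-07, 4.0))   # cephfs.cephfs.meta -> ~0.0006105
print(raw_pg_target(2.1620840658982875e-06, 1.0))  # default.rgw.log    -> ~0.0006486
# Each raw value is then quantized upward to the pool's minimum PG count,
# which is why these tiny targets stay at 1, 16 or 32 PGs.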
Nov 26 11:57:50 compute-0 ceph-mon[74928]: pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:57:51.627616) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764158271627659, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1429, "num_deletes": 505, "total_data_size": 1804671, "memory_usage": 1838432, "flush_reason": "Manual Compaction"}
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764158271631736, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1776976, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13466, "largest_seqno": 14894, "table_properties": {"data_size": 1770656, "index_size": 3075, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 15597, "raw_average_key_size": 18, "raw_value_size": 1756085, "raw_average_value_size": 2044, "num_data_blocks": 141, "num_entries": 859, "num_filter_entries": 859, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764158152, "oldest_key_time": 1764158152, "file_creation_time": 1764158271, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "363c2a1d-8d28-40b7-a8ff-7233f1c9b7d5", "db_session_id": "CJT49RLFB1C6KNYXG0ER", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 4140 microseconds, and 3160 cpu microseconds.
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:57:51.631758) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1776976 bytes OK
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:57:51.631767) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:57:51.632041) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:57:51.632050) EVENT_LOG_v1 {"time_micros": 1764158271632047, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:57:51.632058) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1797259, prev total WAL file size 1797259, number of live WAL files 2.
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:57:51.632464) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1735KB)], [32(7391KB)]
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764158271632490, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9346133, "oldest_snapshot_seqno": -1}
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3831 keys, 7317909 bytes, temperature: kUnknown
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764158271646390, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7317909, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7290470, "index_size": 16769, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9605, "raw_key_size": 93847, "raw_average_key_size": 24, "raw_value_size": 7219320, "raw_average_value_size": 1884, "num_data_blocks": 712, "num_entries": 3831, "num_filter_entries": 3831, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764157079, "oldest_key_time": 0, "file_creation_time": 1764158271, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "363c2a1d-8d28-40b7-a8ff-7233f1c9b7d5", "db_session_id": "CJT49RLFB1C6KNYXG0ER", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:57:51.646500) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7317909 bytes
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:57:51.646800) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 670.8 rd, 525.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.2 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(9.4) write-amplify(4.1) OK, records in: 4854, records dropped: 1023 output_compression: NoCompression
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:57:51.646816) EVENT_LOG_v1 {"time_micros": 1764158271646811, "job": 14, "event": "compaction_finished", "compaction_time_micros": 13933, "compaction_time_cpu_micros": 11618, "output_level": 6, "num_output_files": 1, "total_output_size": 7317909, "num_input_records": 4854, "num_output_records": 3831, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764158271647039, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764158271647864, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:57:51.632396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:57:51.647881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:57:51.647883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:57:51.647884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:57:51.647885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 11:57:51 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/11/26-11:57:51.647886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
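The JOB 14 compaction summary above reports write-amplify(4.1) and read-write-amplify(9.4); both follow directly from the byte counts printed in the same lines. A minimal sketch of the arithmetic:

# Hedged sketch: re-derive the amplification figures RocksDB printed for JOB 14
# from the byte counts in the surrounding lines (L0 input table #34, old L6 table #32,
# compacted output table #35).
l0_in = 1_776_976              # bytes, flushed L0 table #34
l6_in = 9_346_133 - l0_in      # input_data_size minus the L0 file = L6 table #32
out   = 7_317_909              # bytes, compacted output table #35

write_amp      = out / l0_in                    # ~4.1, matches "write-amplify(4.1)"
read_write_amp = (l0_in + l6_in + out) / l0_in  # ~9.4, matches "read-write-amplify(9.4)"
print(f"{write_amp:.1f} {read_write_amp:.1f}")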
Nov 26 11:57:51 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:52 compute-0 ceph-mon[74928]: pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:53 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:54 compute-0 ceph-mon[74928]: pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:55 compute-0 podman[253575]: 2025-11-26 11:57:55.630392932 +0000 UTC m=+0.050952996 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:57:55 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:57:56 compute-0 ceph-mon[74928]: pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:57 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:58 compute-0 ceph-mon[74928]: pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:57:59 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:00 compute-0 ceph-mon[74928]: pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:01 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 11:58:01 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3304 writes, 14K keys, 3304 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3304 writes, 3304 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1294 writes, 5884 keys, 1294 commit groups, 1.0 writes per commit group, ingest: 8.56 MB, 0.01 MB/s
                                           Interval WAL: 1294 writes, 1294 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    434.0      0.04              0.03         7    0.005       0      0       0.0       0.0
                                             L6      1/0    6.98 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    579.6    474.9      0.09              0.07         6    0.015     24K   3201       0.0       0.0
                                            Sum      1/0    6.98 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.6    406.8    462.7      0.12              0.10        13    0.010     24K   3201       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7    450.4    455.9      0.08              0.06         8    0.010     17K   2472       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    579.6    474.9      0.09              0.07         6    0.015     24K   3201       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    442.4      0.04              0.03         6    0.006       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     62.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.016, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.1 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557bd53f31f0#2 capacity: 308.00 MB usage: 1.62 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(97,1.40 MB,0.455346%) FilterBlock(14,75.55 KB,0.0239533%) IndexBlock(14,149.78 KB,0.0474905%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 26 11:58:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
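The kv_alloc value in the _set_new_cache_sizes lines (322961408 bytes) converts to exactly 308 MiB, matching the "capacity: 308.00 MB" block cache in the stats dump above; the match suggests kv_alloc is the monitor's RocksDB block-cache budget, though the log does not state that explicitly. A one-line check of the conversion:

# Hedged sketch: unit conversion linking kv_alloc to the block-cache capacity above.
print(322961408 / 2**20)  # 308.0 MiB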
Nov 26 11:58:01 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:02 compute-0 ceph-mon[74928]: pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:58:02.989 159928 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:58:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:58:02.990 159928 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:58:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:58:02.990 159928 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:58:03 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:04 compute-0 ceph-mon[74928]: pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:05 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:58:06 compute-0 ceph-mon[74928]: pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:07 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:08 compute-0 ceph-mon[74928]: pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:09 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:10 compute-0 ceph-mon[74928]: pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:58:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:58:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:58:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:58:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:58:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:58:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:58:11 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:12 compute-0 ceph-mon[74928]: pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:13 compute-0 podman[253598]: 2025-11-26 11:58:13.622183961 +0000 UTC m=+0.043071496 container health_status b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 26 11:58:13 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:14 compute-0 ceph-mon[74928]: pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:15 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:16 compute-0 podman[253616]: 2025-11-26 11:58:16.608926777 +0000 UTC m=+0.034005524 container health_status 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 26 11:58:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:58:16 compute-0 ceph-mon[74928]: pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:17 compute-0 nova_compute[248203]: 2025-11-26 11:58:17.625 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:58:17 compute-0 nova_compute[248203]: 2025-11-26 11:58:17.625 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 11:58:17 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:18 compute-0 ceph-mon[74928]: pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:19 compute-0 nova_compute[248203]: 2025-11-26 11:58:19.621 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:58:19 compute-0 nova_compute[248203]: 2025-11-26 11:58:19.622 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:58:19 compute-0 nova_compute[248203]: 2025-11-26 11:58:19.639 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:58:19 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:20 compute-0 ceph-mon[74928]: pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:21 compute-0 nova_compute[248203]: 2025-11-26 11:58:21.626 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:58:21 compute-0 nova_compute[248203]: 2025-11-26 11:58:21.627 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 11:58:21 compute-0 nova_compute[248203]: 2025-11-26 11:58:21.627 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 11:58:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:58:21 compute-0 nova_compute[248203]: 2025-11-26 11:58:21.639 248207 DEBUG nova.compute.manager [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 11:58:21 compute-0 nova_compute[248203]: 2025-11-26 11:58:21.639 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:58:21 compute-0 nova_compute[248203]: 2025-11-26 11:58:21.640 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:58:21 compute-0 nova_compute[248203]: 2025-11-26 11:58:21.640 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:58:21 compute-0 nova_compute[248203]: 2025-11-26 11:58:21.640 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:58:21 compute-0 nova_compute[248203]: 2025-11-26 11:58:21.662 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:58:21 compute-0 nova_compute[248203]: 2025-11-26 11:58:21.662 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:58:21 compute-0 nova_compute[248203]: 2025-11-26 11:58:21.663 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:58:21 compute-0 nova_compute[248203]: 2025-11-26 11:58:21.663 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 11:58:21 compute-0 nova_compute[248203]: 2025-11-26 11:58:21.663 248207 DEBUG oslo_concurrency.processutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 11:58:21 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 11:58:21 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/377606276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:58:21 compute-0 nova_compute[248203]: 2025-11-26 11:58:21.990 248207 DEBUG oslo_concurrency.processutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
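The resource tracker above shells out to ceph df to size the RBD-backed disk pool. A minimal sketch reproducing that probe; the command and flags are copied from the log, while the JSON keys ("stats", total/avail bytes) are the usual ceph df --format=json layout and are an assumption here:

# Hedged sketch: run the same capacity probe nova executes above and read the totals.
import json
import subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
).stdout
stats = json.loads(out)["stats"]  # assumed key layout of `ceph df --format=json`
print(stats.get("total_bytes"), stats.get("total_avail_bytes"))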
Nov 26 11:58:22 compute-0 nova_compute[248203]: 2025-11-26 11:58:22.182 248207 WARNING nova.virt.libvirt.driver [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 11:58:22 compute-0 nova_compute[248203]: 2025-11-26 11:58:22.183 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5219MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 11:58:22 compute-0 nova_compute[248203]: 2025-11-26 11:58:22.183 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:58:22 compute-0 nova_compute[248203]: 2025-11-26 11:58:22.183 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:58:22 compute-0 nova_compute[248203]: 2025-11-26 11:58:22.228 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 11:58:22 compute-0 nova_compute[248203]: 2025-11-26 11:58:22.228 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 11:58:22 compute-0 nova_compute[248203]: 2025-11-26 11:58:22.243 248207 DEBUG oslo_concurrency.processutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 11:58:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 11:58:22 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3480956653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:58:22 compute-0 nova_compute[248203]: 2025-11-26 11:58:22.567 248207 DEBUG oslo_concurrency.processutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 11:58:22 compute-0 nova_compute[248203]: 2025-11-26 11:58:22.570 248207 DEBUG nova.compute.provider_tree [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Inventory has not changed in ProviderTree for provider: ffdf5b8d-24ca-43b0-a64a-b7345874e7b4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 11:58:22 compute-0 nova_compute[248203]: 2025-11-26 11:58:22.584 248207 DEBUG nova.scheduler.client.report [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Inventory has not changed for provider ffdf5b8d-24ca-43b0-a64a-b7345874e7b4 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 11:58:22 compute-0 nova_compute[248203]: 2025-11-26 11:58:22.585 248207 DEBUG nova.compute.resource_tracker [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 11:58:22 compute-0 nova_compute[248203]: 2025-11-26 11:58:22.585 248207 DEBUG oslo_concurrency.lockutils [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
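The inventory reported above is what placement uses to bound scheduling; usable capacity per resource class is (total - reserved) x allocation_ratio. A minimal sketch with the values from the log line:

# Hedged sketch: effective schedulable capacity implied by the inventory nova reported above.
inv = {
    "VCPU":      {"total": 4,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7681, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
}
for rc, v in inv.items():
    usable = (v["total"] - v["reserved"]) * v["allocation_ratio"]
    print(rc, usable)  # VCPU 16.0, MEMORY_MB 7169.0, DISK_GB ~53.1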
Nov 26 11:58:22 compute-0 ceph-mon[74928]: pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:22 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/377606276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:58:22 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3480956653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 11:58:23 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:24 compute-0 nova_compute[248203]: 2025-11-26 11:58:24.570 248207 DEBUG oslo_service.periodic_task [None req-63323834-9c97-4c28-a81f-caf78026c336 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 11:58:24 compute-0 ceph-mon[74928]: pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:25 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:58:26 compute-0 podman[253676]: 2025-11-26 11:58:26.633321537 +0000 UTC m=+0.058993845 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 26 11:58:26 compute-0 ceph-mon[74928]: pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:27 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 11:58:28 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1875733352' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 11:58:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 11:58:28 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1875733352' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 11:58:28 compute-0 ceph-mon[74928]: pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:28 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/1875733352' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 11:58:28 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/1875733352' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 11:58:29 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:30 compute-0 ceph-mon[74928]: pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:58:31 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:32 compute-0 ceph-mon[74928]: pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:33 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:34 compute-0 ceph-mon[74928]: pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:35 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:58:36 compute-0 ceph-mon[74928]: pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:37 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:38 compute-0 ceph-mon[74928]: pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:39 compute-0 sudo[253699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:58:39 compute-0 sudo[253699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:58:39 compute-0 sudo[253699]: pam_unix(sudo:session): session closed for user root
Nov 26 11:58:39 compute-0 sudo[253724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:58:39 compute-0 sudo[253724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:58:39 compute-0 sudo[253724]: pam_unix(sudo:session): session closed for user root
Nov 26 11:58:39 compute-0 sudo[253749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:58:39 compute-0 sudo[253749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:58:39 compute-0 sudo[253749]: pam_unix(sudo:session): session closed for user root
Nov 26 11:58:39 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:39 compute-0 sudo[253774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 11:58:39 compute-0 sudo[253774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:58:40 compute-0 sudo[253774]: pam_unix(sudo:session): session closed for user root
Nov 26 11:58:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:58:40 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:58:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 11:58:40 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:58:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 11:58:40 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:58:40 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 27339fe3-f3b7-4cfe-b182-fc4c16353e40 does not exist
Nov 26 11:58:40 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 863d178b-e139-49fe-94e6-bf520d1d5408 does not exist
Nov 26 11:58:40 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 126c0622-c44e-47d0-93a0-268951f4567b does not exist
Nov 26 11:58:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 11:58:40 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:58:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 11:58:40 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:58:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:58:40 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:58:40 compute-0 sudo[253828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:58:40 compute-0 sudo[253828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:58:40 compute-0 sudo[253828]: pam_unix(sudo:session): session closed for user root
Nov 26 11:58:40 compute-0 sudo[253853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:58:40 compute-0 sudo[253853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:58:40 compute-0 sudo[253853]: pam_unix(sudo:session): session closed for user root
Nov 26 11:58:40 compute-0 sudo[253878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:58:40 compute-0 sudo[253878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:58:40 compute-0 sudo[253878]: pam_unix(sudo:session): session closed for user root
Nov 26 11:58:40 compute-0 sudo[253903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 11:58:40 compute-0 sudo[253903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:58:40 compute-0 podman[253959]: 2025-11-26 11:58:40.449766982 +0000 UTC m=+0.026573031 container create e9a56be80441d93d56c0f5d6a7ac796d319a16ede9c838a187d20a2106d2314c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_swartz, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 11:58:40 compute-0 systemd[1]: Started libpod-conmon-e9a56be80441d93d56c0f5d6a7ac796d319a16ede9c838a187d20a2106d2314c.scope.
Nov 26 11:58:40 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:58:40 compute-0 podman[253959]: 2025-11-26 11:58:40.502590253 +0000 UTC m=+0.079396322 container init e9a56be80441d93d56c0f5d6a7ac796d319a16ede9c838a187d20a2106d2314c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:58:40 compute-0 podman[253959]: 2025-11-26 11:58:40.507166761 +0000 UTC m=+0.083972810 container start e9a56be80441d93d56c0f5d6a7ac796d319a16ede9c838a187d20a2106d2314c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_swartz, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 11:58:40 compute-0 podman[253959]: 2025-11-26 11:58:40.508406118 +0000 UTC m=+0.085212167 container attach e9a56be80441d93d56c0f5d6a7ac796d319a16ede9c838a187d20a2106d2314c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:58:40 compute-0 trusting_swartz[253972]: 167 167
Nov 26 11:58:40 compute-0 systemd[1]: libpod-e9a56be80441d93d56c0f5d6a7ac796d319a16ede9c838a187d20a2106d2314c.scope: Deactivated successfully.
Nov 26 11:58:40 compute-0 podman[253959]: 2025-11-26 11:58:40.511492117 +0000 UTC m=+0.088298166 container died e9a56be80441d93d56c0f5d6a7ac796d319a16ede9c838a187d20a2106d2314c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 11:58:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9c94959b358b1df7a0ac9b13d55a84d514ac2671565d3d2d7ac6e8f07d8937d-merged.mount: Deactivated successfully.
Nov 26 11:58:40 compute-0 podman[253959]: 2025-11-26 11:58:40.530591385 +0000 UTC m=+0.107397434 container remove e9a56be80441d93d56c0f5d6a7ac796d319a16ede9c838a187d20a2106d2314c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Nov 26 11:58:40 compute-0 podman[253959]: 2025-11-26 11:58:40.438893832 +0000 UTC m=+0.015699880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:58:40 compute-0 systemd[1]: libpod-conmon-e9a56be80441d93d56c0f5d6a7ac796d319a16ede9c838a187d20a2106d2314c.scope: Deactivated successfully.
Nov 26 11:58:40 compute-0 podman[253994]: 2025-11-26 11:58:40.643357781 +0000 UTC m=+0.025217676 container create b8655543a3caadad0082c76df6cb643e0570897eb2ec2e286d961b6474e76f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 11:58:40 compute-0 systemd[1]: Started libpod-conmon-b8655543a3caadad0082c76df6cb643e0570897eb2ec2e286d961b6474e76f2c.scope.
Nov 26 11:58:40 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:58:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26660b74a69fc55132e4b33d46bc3f2c305be8040b085d00366c208e22d0d04e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:58:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26660b74a69fc55132e4b33d46bc3f2c305be8040b085d00366c208e22d0d04e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:58:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26660b74a69fc55132e4b33d46bc3f2c305be8040b085d00366c208e22d0d04e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:58:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26660b74a69fc55132e4b33d46bc3f2c305be8040b085d00366c208e22d0d04e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:58:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26660b74a69fc55132e4b33d46bc3f2c305be8040b085d00366c208e22d0d04e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 11:58:40 compute-0 podman[253994]: 2025-11-26 11:58:40.697942052 +0000 UTC m=+0.079801938 container init b8655543a3caadad0082c76df6cb643e0570897eb2ec2e286d961b6474e76f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cerf, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:58:40 compute-0 podman[253994]: 2025-11-26 11:58:40.703454886 +0000 UTC m=+0.085314772 container start b8655543a3caadad0082c76df6cb643e0570897eb2ec2e286d961b6474e76f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cerf, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 11:58:40 compute-0 podman[253994]: 2025-11-26 11:58:40.704791426 +0000 UTC m=+0.086651310 container attach b8655543a3caadad0082c76df6cb643e0570897eb2ec2e286d961b6474e76f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cerf, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:58:40 compute-0 podman[253994]: 2025-11-26 11:58:40.633483835 +0000 UTC m=+0.015343740 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:58:40 compute-0 ceph-mon[74928]: pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:40 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:58:40 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 11:58:40 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:58:40 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 11:58:40 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 11:58:40 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:58:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Optimize plan auto_2025-11-26_11:58:41
Nov 26 11:58:41 compute-0 ceph-mgr[75197]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 11:58:41 compute-0 ceph-mgr[75197]: [balancer INFO root] do_upmap
Nov 26 11:58:41 compute-0 ceph-mgr[75197]: [balancer INFO root] pools ['backups', 'vms', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'volumes', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data']
Nov 26 11:58:41 compute-0 ceph-mgr[75197]: [balancer INFO root] prepared 0/10 changes
Nov 26 11:58:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:58:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:58:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:58:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:58:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:58:41 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:58:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 11:58:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 11:58:41 compute-0 eager_cerf[254008]: --> passed data devices: 0 physical, 3 LVM
Nov 26 11:58:41 compute-0 eager_cerf[254008]: --> relative data size: 1.0
Nov 26 11:58:41 compute-0 eager_cerf[254008]: --> All data devices are unavailable
Nov 26 11:58:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:58:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:58:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 11:58:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:58:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 11:58:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:58:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 11:58:41 compute-0 ceph-mgr[75197]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 11:58:41 compute-0 systemd[1]: libpod-b8655543a3caadad0082c76df6cb643e0570897eb2ec2e286d961b6474e76f2c.scope: Deactivated successfully.
Nov 26 11:58:41 compute-0 podman[254037]: 2025-11-26 11:58:41.542835323 +0000 UTC m=+0.017348489 container died b8655543a3caadad0082c76df6cb643e0570897eb2ec2e286d961b6474e76f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:58:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-26660b74a69fc55132e4b33d46bc3f2c305be8040b085d00366c208e22d0d04e-merged.mount: Deactivated successfully.
Nov 26 11:58:41 compute-0 podman[254037]: 2025-11-26 11:58:41.571876477 +0000 UTC m=+0.046389632 container remove b8655543a3caadad0082c76df6cb643e0570897eb2ec2e286d961b6474e76f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cerf, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 11:58:41 compute-0 systemd[1]: libpod-conmon-b8655543a3caadad0082c76df6cb643e0570897eb2ec2e286d961b6474e76f2c.scope: Deactivated successfully.
Nov 26 11:58:41 compute-0 sudo[253903]: pam_unix(sudo:session): session closed for user root
Nov 26 11:58:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:58:41 compute-0 sudo[254048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:58:41 compute-0 sudo[254048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:58:41 compute-0 sudo[254048]: pam_unix(sudo:session): session closed for user root
Nov 26 11:58:41 compute-0 sudo[254073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:58:41 compute-0 sudo[254073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:58:41 compute-0 sudo[254073]: pam_unix(sudo:session): session closed for user root
Nov 26 11:58:41 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:41 compute-0 sudo[254098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:58:41 compute-0 sudo[254098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:58:41 compute-0 sudo[254098]: pam_unix(sudo:session): session closed for user root
Nov 26 11:58:41 compute-0 sudo[254123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- lvm list --format json
Nov 26 11:58:41 compute-0 sudo[254123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:58:41 compute-0 podman[254179]: 2025-11-26 11:58:41.984773347 +0000 UTC m=+0.024668871 container create 4c18e4be7031e023cf1abbacd6895762d00c15254471940b9b79843112f42fa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_torvalds, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:58:42 compute-0 systemd[1]: Started libpod-conmon-4c18e4be7031e023cf1abbacd6895762d00c15254471940b9b79843112f42fa1.scope.
Nov 26 11:58:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:58:42 compute-0 podman[254179]: 2025-11-26 11:58:42.027796401 +0000 UTC m=+0.067691945 container init 4c18e4be7031e023cf1abbacd6895762d00c15254471940b9b79843112f42fa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 26 11:58:42 compute-0 podman[254179]: 2025-11-26 11:58:42.032413737 +0000 UTC m=+0.072309260 container start 4c18e4be7031e023cf1abbacd6895762d00c15254471940b9b79843112f42fa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_torvalds, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:58:42 compute-0 podman[254179]: 2025-11-26 11:58:42.033499664 +0000 UTC m=+0.073395187 container attach 4c18e4be7031e023cf1abbacd6895762d00c15254471940b9b79843112f42fa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:58:42 compute-0 angry_torvalds[254193]: 167 167
Nov 26 11:58:42 compute-0 podman[254179]: 2025-11-26 11:58:42.034962973 +0000 UTC m=+0.074858506 container died 4c18e4be7031e023cf1abbacd6895762d00c15254471940b9b79843112f42fa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 11:58:42 compute-0 systemd[1]: libpod-4c18e4be7031e023cf1abbacd6895762d00c15254471940b9b79843112f42fa1.scope: Deactivated successfully.
Nov 26 11:58:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-10eb3365b70e443a02879187f35f22e3bed4b78338a1b076bfc6606200a54521-merged.mount: Deactivated successfully.
Nov 26 11:58:42 compute-0 podman[254179]: 2025-11-26 11:58:42.049082453 +0000 UTC m=+0.088977976 container remove 4c18e4be7031e023cf1abbacd6895762d00c15254471940b9b79843112f42fa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_torvalds, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 11:58:42 compute-0 podman[254179]: 2025-11-26 11:58:41.974892177 +0000 UTC m=+0.014787721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:58:42 compute-0 systemd[1]: libpod-conmon-4c18e4be7031e023cf1abbacd6895762d00c15254471940b9b79843112f42fa1.scope: Deactivated successfully.
Nov 26 11:58:42 compute-0 podman[254215]: 2025-11-26 11:58:42.163205015 +0000 UTC m=+0.028257275 container create 176394c61e13a843728fc147a9ae4711fb4378d4ee129975a328b40bca93bc92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mestorf, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 11:58:42 compute-0 systemd[1]: Started libpod-conmon-176394c61e13a843728fc147a9ae4711fb4378d4ee129975a328b40bca93bc92.scope.
Nov 26 11:58:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/340cb3e8a88c8e76ded1dc94f107f15740c690da0006a0edca91cfe6203cf347/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/340cb3e8a88c8e76ded1dc94f107f15740c690da0006a0edca91cfe6203cf347/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/340cb3e8a88c8e76ded1dc94f107f15740c690da0006a0edca91cfe6203cf347/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/340cb3e8a88c8e76ded1dc94f107f15740c690da0006a0edca91cfe6203cf347/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:58:42 compute-0 podman[254215]: 2025-11-26 11:58:42.223723395 +0000 UTC m=+0.088775676 container init 176394c61e13a843728fc147a9ae4711fb4378d4ee129975a328b40bca93bc92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 11:58:42 compute-0 podman[254215]: 2025-11-26 11:58:42.22869577 +0000 UTC m=+0.093748030 container start 176394c61e13a843728fc147a9ae4711fb4378d4ee129975a328b40bca93bc92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 11:58:42 compute-0 podman[254215]: 2025-11-26 11:58:42.229657382 +0000 UTC m=+0.094709642 container attach 176394c61e13a843728fc147a9ae4711fb4378d4ee129975a328b40bca93bc92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 11:58:42 compute-0 podman[254215]: 2025-11-26 11:58:42.151890884 +0000 UTC m=+0.016943165 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:58:42 compute-0 ceph-mon[74928]: pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]: {
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:     "0": [
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:         {
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "devices": [
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "/dev/loop3"
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             ],
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "lv_name": "ceph_lv0",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "lv_size": "21470642176",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a9ad59a0-aa2e-4d92-b571-519d2d145b6a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "lv_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "name": "ceph_lv0",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "tags": {
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.block_uuid": "Tr0877-9pM0-DAiK-SenW-jaZo-POhq-lXaSkZ",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.cluster_name": "ceph",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.crush_device_class": "",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.encrypted": "0",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.osd_fsid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.osd_id": "0",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.type": "block",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.vdo": "0"
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             },
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "type": "block",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "vg_name": "ceph_vg0"
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:         }
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:     ],
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:     "1": [
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:         {
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "devices": [
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "/dev/loop4"
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             ],
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "lv_name": "ceph_lv1",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "lv_size": "21470642176",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2627095b-eef8-4027-bfef-68bf7cb6801f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "lv_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "name": "ceph_lv1",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "tags": {
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.block_uuid": "T7Oor2-FJG7-0JEP-A5Ul-DinW-vliY-t11uNS",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.cluster_name": "ceph",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.crush_device_class": "",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.encrypted": "0",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.osd_fsid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.osd_id": "1",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.type": "block",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.vdo": "0"
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             },
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "type": "block",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "vg_name": "ceph_vg1"
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:         }
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:     ],
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:     "2": [
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:         {
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "devices": [
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "/dev/loop5"
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             ],
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "lv_name": "ceph_lv2",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "lv_size": "21470642176",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ebab460c-3fd7-5f66-aa87-e10c143123f7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d56156fb-7361-4bef-b06b-1320109b4323,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "lv_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "name": "ceph_lv2",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "tags": {
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.block_uuid": "PSpn9X-FAj2-fQsd-sAjf-WqHR-sYOz-Tr9osF",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.cluster_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.cluster_name": "ceph",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.crush_device_class": "",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.encrypted": "0",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.osd_fsid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.osd_id": "2",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.type": "block",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:                 "ceph.vdo": "0"
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             },
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "type": "block",
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:             "vg_name": "ceph_vg2"
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:         }
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]:     ]
Nov 26 11:58:42 compute-0 hungry_mestorf[254228]: }
Nov 26 11:58:42 compute-0 systemd[1]: libpod-176394c61e13a843728fc147a9ae4711fb4378d4ee129975a328b40bca93bc92.scope: Deactivated successfully.
Nov 26 11:58:42 compute-0 podman[254237]: 2025-11-26 11:58:42.908697811 +0000 UTC m=+0.019944273 container died 176394c61e13a843728fc147a9ae4711fb4378d4ee129975a328b40bca93bc92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mestorf, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 11:58:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-340cb3e8a88c8e76ded1dc94f107f15740c690da0006a0edca91cfe6203cf347-merged.mount: Deactivated successfully.
Nov 26 11:58:42 compute-0 podman[254237]: 2025-11-26 11:58:42.944994653 +0000 UTC m=+0.056241096 container remove 176394c61e13a843728fc147a9ae4711fb4378d4ee129975a328b40bca93bc92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 11:58:42 compute-0 systemd[1]: libpod-conmon-176394c61e13a843728fc147a9ae4711fb4378d4ee129975a328b40bca93bc92.scope: Deactivated successfully.
Nov 26 11:58:42 compute-0 sudo[254123]: pam_unix(sudo:session): session closed for user root
Nov 26 11:58:43 compute-0 sudo[254248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:58:43 compute-0 sudo[254248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:58:43 compute-0 sudo[254248]: pam_unix(sudo:session): session closed for user root
Nov 26 11:58:43 compute-0 sudo[254273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 11:58:43 compute-0 sudo[254273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:58:43 compute-0 sudo[254273]: pam_unix(sudo:session): session closed for user root
Nov 26 11:58:43 compute-0 sudo[254298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:58:43 compute-0 sudo[254298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:58:43 compute-0 sudo[254298]: pam_unix(sudo:session): session closed for user root
Nov 26 11:58:43 compute-0 sudo[254323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ebab460c-3fd7-5f66-aa87-e10c143123f7/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ebab460c-3fd7-5f66-aa87-e10c143123f7 -- raw list --format json
Nov 26 11:58:43 compute-0 sudo[254323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:58:43 compute-0 podman[254379]: 2025-11-26 11:58:43.446460339 +0000 UTC m=+0.031092141 container create 6c92b34559915be124e0302599f6ffb9ab88ea92e2bde2e9df022b21ff5776a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_pascal, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 11:58:43 compute-0 systemd[1]: Started libpod-conmon-6c92b34559915be124e0302599f6ffb9ab88ea92e2bde2e9df022b21ff5776a8.scope.
Nov 26 11:58:43 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:58:43 compute-0 podman[254379]: 2025-11-26 11:58:43.497218246 +0000 UTC m=+0.081850048 container init 6c92b34559915be124e0302599f6ffb9ab88ea92e2bde2e9df022b21ff5776a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 11:58:43 compute-0 podman[254379]: 2025-11-26 11:58:43.501394429 +0000 UTC m=+0.086026221 container start 6c92b34559915be124e0302599f6ffb9ab88ea92e2bde2e9df022b21ff5776a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_pascal, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 11:58:43 compute-0 podman[254379]: 2025-11-26 11:58:43.502600594 +0000 UTC m=+0.087232406 container attach 6c92b34559915be124e0302599f6ffb9ab88ea92e2bde2e9df022b21ff5776a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_pascal, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:58:43 compute-0 lucid_pascal[254392]: 167 167
Nov 26 11:58:43 compute-0 systemd[1]: libpod-6c92b34559915be124e0302599f6ffb9ab88ea92e2bde2e9df022b21ff5776a8.scope: Deactivated successfully.
Nov 26 11:58:43 compute-0 podman[254379]: 2025-11-26 11:58:43.506000454 +0000 UTC m=+0.090632246 container died 6c92b34559915be124e0302599f6ffb9ab88ea92e2bde2e9df022b21ff5776a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 11:58:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-34021aef31960e7090ba23a9f703d36c9d4950bfdbc15c8d4f9e2883b2181111-merged.mount: Deactivated successfully.
Nov 26 11:58:43 compute-0 podman[254379]: 2025-11-26 11:58:43.523969762 +0000 UTC m=+0.108601553 container remove 6c92b34559915be124e0302599f6ffb9ab88ea92e2bde2e9df022b21ff5776a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 11:58:43 compute-0 podman[254379]: 2025-11-26 11:58:43.43378464 +0000 UTC m=+0.018416453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:58:43 compute-0 systemd[1]: libpod-conmon-6c92b34559915be124e0302599f6ffb9ab88ea92e2bde2e9df022b21ff5776a8.scope: Deactivated successfully.
Nov 26 11:58:43 compute-0 podman[254414]: 2025-11-26 11:58:43.656802841 +0000 UTC m=+0.031889614 container create bf18e844a1172900934b0fd068c13f63b81545ef58959a9245876ea72ff1bf89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:58:43 compute-0 systemd[1]: Started libpod-conmon-bf18e844a1172900934b0fd068c13f63b81545ef58959a9245876ea72ff1bf89.scope.
Nov 26 11:58:43 compute-0 systemd[1]: Started libcrun container.
Nov 26 11:58:43 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/197f0b5d1d1cd811b5f0da1b65dafa55a8ab5f086972e7523db839be6ea2c523/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 11:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/197f0b5d1d1cd811b5f0da1b65dafa55a8ab5f086972e7523db839be6ea2c523/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 11:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/197f0b5d1d1cd811b5f0da1b65dafa55a8ab5f086972e7523db839be6ea2c523/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 11:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/197f0b5d1d1cd811b5f0da1b65dafa55a8ab5f086972e7523db839be6ea2c523/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 11:58:43 compute-0 podman[254414]: 2025-11-26 11:58:43.715737083 +0000 UTC m=+0.090823858 container init bf18e844a1172900934b0fd068c13f63b81545ef58959a9245876ea72ff1bf89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 11:58:43 compute-0 podman[254414]: 2025-11-26 11:58:43.721606148 +0000 UTC m=+0.096692923 container start bf18e844a1172900934b0fd068c13f63b81545ef58959a9245876ea72ff1bf89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_driscoll, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 26 11:58:43 compute-0 podman[254414]: 2025-11-26 11:58:43.723165698 +0000 UTC m=+0.098252492 container attach bf18e844a1172900934b0fd068c13f63b81545ef58959a9245876ea72ff1bf89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_driscoll, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 11:58:43 compute-0 podman[254424]: 2025-11-26 11:58:43.738323377 +0000 UTC m=+0.056626322 container health_status b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible)
Nov 26 11:58:43 compute-0 podman[254414]: 2025-11-26 11:58:43.643075159 +0000 UTC m=+0.018161933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 11:58:44 compute-0 epic_driscoll[254428]: {
Nov 26 11:58:44 compute-0 epic_driscoll[254428]:     "2627095b-eef8-4027-bfef-68bf7cb6801f": {
Nov 26 11:58:44 compute-0 epic_driscoll[254428]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:58:44 compute-0 epic_driscoll[254428]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 11:58:44 compute-0 epic_driscoll[254428]:         "osd_id": 1,
Nov 26 11:58:44 compute-0 epic_driscoll[254428]:         "osd_uuid": "2627095b-eef8-4027-bfef-68bf7cb6801f",
Nov 26 11:58:44 compute-0 epic_driscoll[254428]:         "type": "bluestore"
Nov 26 11:58:44 compute-0 epic_driscoll[254428]:     },
Nov 26 11:58:44 compute-0 epic_driscoll[254428]:     "a9ad59a0-aa2e-4d92-b571-519d2d145b6a": {
Nov 26 11:58:44 compute-0 epic_driscoll[254428]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:58:44 compute-0 epic_driscoll[254428]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 11:58:44 compute-0 epic_driscoll[254428]:         "osd_id": 0,
Nov 26 11:58:44 compute-0 epic_driscoll[254428]:         "osd_uuid": "a9ad59a0-aa2e-4d92-b571-519d2d145b6a",
Nov 26 11:58:44 compute-0 epic_driscoll[254428]:         "type": "bluestore"
Nov 26 11:58:44 compute-0 epic_driscoll[254428]:     },
Nov 26 11:58:44 compute-0 epic_driscoll[254428]:     "d56156fb-7361-4bef-b06b-1320109b4323": {
Nov 26 11:58:44 compute-0 epic_driscoll[254428]:         "ceph_fsid": "ebab460c-3fd7-5f66-aa87-e10c143123f7",
Nov 26 11:58:44 compute-0 epic_driscoll[254428]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 11:58:44 compute-0 epic_driscoll[254428]:         "osd_id": 2,
Nov 26 11:58:44 compute-0 epic_driscoll[254428]:         "osd_uuid": "d56156fb-7361-4bef-b06b-1320109b4323",
Nov 26 11:58:44 compute-0 epic_driscoll[254428]:         "type": "bluestore"
Nov 26 11:58:44 compute-0 epic_driscoll[254428]:     }
Nov 26 11:58:44 compute-0 epic_driscoll[254428]: }
Nov 26 11:58:44 compute-0 systemd[1]: libpod-bf18e844a1172900934b0fd068c13f63b81545ef58959a9245876ea72ff1bf89.scope: Deactivated successfully.
Nov 26 11:58:44 compute-0 podman[254477]: 2025-11-26 11:58:44.504981937 +0000 UTC m=+0.017780333 container died bf18e844a1172900934b0fd068c13f63b81545ef58959a9245876ea72ff1bf89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 11:58:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-197f0b5d1d1cd811b5f0da1b65dafa55a8ab5f086972e7523db839be6ea2c523-merged.mount: Deactivated successfully.
Nov 26 11:58:44 compute-0 podman[254477]: 2025-11-26 11:58:44.533542634 +0000 UTC m=+0.046341041 container remove bf18e844a1172900934b0fd068c13f63b81545ef58959a9245876ea72ff1bf89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 11:58:44 compute-0 systemd[1]: libpod-conmon-bf18e844a1172900934b0fd068c13f63b81545ef58959a9245876ea72ff1bf89.scope: Deactivated successfully.
Nov 26 11:58:44 compute-0 sudo[254323]: pam_unix(sudo:session): session closed for user root
Nov 26 11:58:44 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 11:58:44 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:58:44 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 11:58:44 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:58:44 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev 2f18fd43-41b9-43c7-94d0-c88a701c109b does not exist
Nov 26 11:58:44 compute-0 ceph-mgr[75197]: [progress WARNING root] complete: ev bad7c7dc-883a-4158-94bf-44aed94ddf30 does not exist
Nov 26 11:58:44 compute-0 sudo[254488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 11:58:44 compute-0 sudo[254488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:58:44 compute-0 sudo[254488]: pam_unix(sudo:session): session closed for user root
Nov 26 11:58:44 compute-0 sudo[254513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 11:58:44 compute-0 sudo[254513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 11:58:44 compute-0 sudo[254513]: pam_unix(sudo:session): session closed for user root
Nov 26 11:58:44 compute-0 ceph-mon[74928]: pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:44 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:58:44 compute-0 ceph-mon[74928]: from='mgr.14132 192.168.122.100:0/911431002' entity='mgr.compute-0.mwrktr' 
Nov 26 11:58:45 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:46 compute-0 sshd-session[254538]: Accepted publickey for zuul from 192.168.122.10 port 59962 ssh2: ECDSA SHA256:Ri1FttUDYah1mI7cQP4QF8tE4Jqe/KzhJvhCRDggR5A
Nov 26 11:58:46 compute-0 systemd-logind[744]: New session 51 of user zuul.
Nov 26 11:58:46 compute-0 systemd[1]: Started Session 51 of User zuul.
Nov 26 11:58:46 compute-0 sshd-session[254538]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 11:58:46 compute-0 sudo[254542]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 26 11:58:46 compute-0 sudo[254542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 11:58:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:58:46 compute-0 ceph-mon[74928]: pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:47 compute-0 podman[254576]: 2025-11-26 11:58:47.114316482 +0000 UTC m=+0.068505299 container health_status 5f401b83ca5f86ed33da7d010b1da1561564f07ce95b2e756c93a36d59e54803 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 26 11:58:47 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:48 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14395 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:58:48 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14397 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:58:48 compute-0 ceph-mon[74928]: pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:48 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 26 11:58:48 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3152996711' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 26 11:58:49 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:49 compute-0 ceph-mon[74928]: from='client.14395 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:58:49 compute-0 ceph-mon[74928]: from='client.14397 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:58:49 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3152996711' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 26 11:58:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 11:58:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:58:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 11:58:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:58:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:58:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:58:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:58:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:58:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:58:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:58:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:58:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:58:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 11:58:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:58:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:58:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:58:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 11:58:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:58:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 11:58:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:58:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:58:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:58:50 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 11:58:50 compute-0 ceph-mon[74928]: pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:58:51 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:52 compute-0 ceph-mon[74928]: pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:53 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:54 compute-0 ceph-mon[74928]: pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:55 compute-0 ovs-vsctl[254869]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 26 11:58:55 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:55 compute-0 virtqemud[247765]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 26 11:58:55 compute-0 virtqemud[247765]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 26 11:58:55 compute-0 virtqemud[247765]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 26 11:58:56 compute-0 ceph-mds[100145]: mds.cephfs.compute-0.hvqwax asok_command: cache status {prefix=cache status} (starting...)
Nov 26 11:58:56 compute-0 ceph-mds[100145]: mds.cephfs.compute-0.hvqwax asok_command: client ls {prefix=client ls} (starting...)
Nov 26 11:58:56 compute-0 lvm[255176]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 26 11:58:56 compute-0 lvm[255176]: VG ceph_vg1 finished
Nov 26 11:58:56 compute-0 lvm[255174]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 26 11:58:56 compute-0 lvm[255174]: VG ceph_vg2 finished
Nov 26 11:58:56 compute-0 lvm[255189]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 26 11:58:56 compute-0 lvm[255189]: VG ceph_vg0 finished
Nov 26 11:58:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:58:56 compute-0 podman[255214]: 2025-11-26 11:58:56.737384826 +0000 UTC m=+0.072811026 container health_status cfbeb1268b73bea59211ec8c025ee3818e11127880f6c0675b5fbea39bdd577e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 26 11:58:56 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14401 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:58:56 compute-0 ceph-mon[74928]: pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:57 compute-0 ceph-mds[100145]: mds.cephfs.compute-0.hvqwax asok_command: damage ls {prefix=damage ls} (starting...)
Nov 26 11:58:57 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14403 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:58:57 compute-0 ceph-mds[100145]: mds.cephfs.compute-0.hvqwax asok_command: dump loads {prefix=dump loads} (starting...)
Nov 26 11:58:57 compute-0 ceph-mds[100145]: mds.cephfs.compute-0.hvqwax asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 26 11:58:57 compute-0 ceph-mds[100145]: mds.cephfs.compute-0.hvqwax asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 26 11:58:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 26 11:58:57 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3537224521' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 26 11:58:57 compute-0 ceph-mds[100145]: mds.cephfs.compute-0.hvqwax asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 26 11:58:57 compute-0 ceph-mds[100145]: mds.cephfs.compute-0.hvqwax asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 26 11:58:57 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 11:58:57 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1702359517' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:58:57 compute-0 ceph-mds[100145]: mds.cephfs.compute-0.hvqwax asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 26 11:58:57 compute-0 ceph-mon[74928]: from='client.14401 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:58:57 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3537224521' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 26 11:58:57 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1702359517' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 11:58:57 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14409 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:58:57 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:58:57.856+0000 7fc9b4913640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 26 11:58:57 compute-0 ceph-mgr[75197]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 26 11:58:57 compute-0 ceph-mds[100145]: mds.cephfs.compute-0.hvqwax asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 26 11:58:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 26 11:58:58 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3509995728' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 26 11:58:58 compute-0 ceph-mds[100145]: mds.cephfs.compute-0.hvqwax asok_command: ops {prefix=ops} (starting...)
Nov 26 11:58:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 26 11:58:58 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2513605242' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 26 11:58:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 26 11:58:58 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1347091930' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 26 11:58:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 26 11:58:58 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3899317378' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 26 11:58:58 compute-0 ceph-mds[100145]: mds.cephfs.compute-0.hvqwax asok_command: session ls {prefix=session ls} (starting...)
Nov 26 11:58:58 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14421 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:58:58 compute-0 ceph-mds[100145]: mds.cephfs.compute-0.hvqwax asok_command: status {prefix=status} (starting...)
Nov 26 11:58:58 compute-0 ceph-mon[74928]: from='client.14403 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:58:58 compute-0 ceph-mon[74928]: pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:58 compute-0 ceph-mon[74928]: from='client.14409 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:58:58 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3509995728' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 26 11:58:58 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2513605242' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 26 11:58:58 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1347091930' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 26 11:58:58 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3899317378' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 26 11:58:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 26 11:58:58 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3674867218' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 26 11:58:59 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14425 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:58:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 26 11:58:59 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1686108385' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 26 11:58:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 26 11:58:59 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2669557270' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 26 11:58:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 26 11:58:59 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3982088416' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 11:58:59 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:58:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 26 11:58:59 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/944000156' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 26 11:58:59 compute-0 ceph-mon[74928]: from='client.14421 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:58:59 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3674867218' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 26 11:58:59 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1686108385' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 26 11:58:59 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2669557270' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 26 11:58:59 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3982088416' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 11:58:59 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/944000156' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 26 11:59:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 26 11:59:00 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3150554502' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 26 11:59:00 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14437 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:00 compute-0 ceph-mgr[75197]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 26 11:59:00 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:59:00.145+0000 7fc9b4913640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 26 11:59:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 26 11:59:00 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1925222839' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 26 11:59:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 26 11:59:00 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/24307029' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 26 11:59:00 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14443 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 26 11:59:00 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2403819862' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 26 11:59:00 compute-0 ceph-mon[74928]: from='client.14425 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:00 compute-0 ceph-mon[74928]: pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:59:00 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3150554502' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 26 11:59:00 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1925222839' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 26 11:59:00 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/24307029' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 26 11:59:00 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2403819862' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 26 11:59:00 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14447 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 26 11:59:01 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/488136620' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 26 11:59:01 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14451 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 26 11:59:01 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/721172483' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 26 11:59:01 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14455 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:59:01 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:59:01 compute-0 ceph-mon[74928]: from='client.14437 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:01 compute-0 ceph-mon[74928]: from='client.14443 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:01 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/488136620' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 26 11:59:01 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/721172483' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 26 11:59:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 26 11:59:01 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/571159541' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 26 11:59:01 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14459 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 41 handle_osd_map epochs [42,43], i have 41, src has [1,43]
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 37) v1
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:40:46.760483+0000 osd.2 (osd.2) 36 : cluster [DBG] 5.1b deep-scrub starts
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:40:46.774652+0000 osd.2 (osd.2) 37 : cluster [DBG] 5.1b deep-scrub ok
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:18.330824+0000)
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:40:47.742025+0000 osd.2 (osd.2) 38 : cluster [DBG] 5.1c scrub starts
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:40:47.756157+0000 osd.2 (osd.2) 39 : cluster [DBG] 5.1c scrub ok
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: handle_auth_request added challenge on 0x55fea6975c00
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 43 ms_handle_reset con 0x55fea6975c00 session 0x55fea6eea960
Nov 26 11:59:01 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:01 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:01 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 56688640 unmapped: 2940928 heap: 59629568 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:01 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 325951 data_alloc: 218103808 data_used: 40960
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 39) v1
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:40:47.742025+0000 osd.2 (osd.2) 38 : cluster [DBG] 5.1c scrub starts
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:40:47.756157+0000 osd.2 (osd.2) 39 : cluster [DBG] 5.1c scrub ok
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: handle_auth_request added challenge on 0x55fea79bec00
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _renew_subs
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 43 handle_osd_map epochs [44,44], i have 43, src has [1,44]
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 44 ms_handle_reset con 0x55fea79bec00 session 0x55fea6b6fe00
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 44 heartbeat osd_stat(store_statfs(0x4fe15d000/0x0/0x4ffc00000, data 0x316cd/0x70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:19.330953+0000)
Nov 26 11:59:01 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 56819712 unmapped: 2809856 heap: 59629568 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 44 handle_osd_map epochs [45,45], i have 44, src has [1,45]
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:20.331114+0000)
Nov 26 11:59:01 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 56836096 unmapped: 2793472 heap: 59629568 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 45 handle_osd_map epochs [46,46], i have 45, src has [1,46]
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:21.331271+0000)
Nov 26 11:59:01 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 56967168 unmapped: 2662400 heap: 59629568 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:22.331421+0000)
Nov 26 11:59:01 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 56975360 unmapped: 2654208 heap: 59629568 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.1f deep-scrub starts
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 5.1f deep-scrub ok
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x36c20/0x79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 46 handle_osd_map epochs [47,48], i have 46, src has [1,48]
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=36) [2] r=0 lpr=36 crt=44'64 lcod 44'63 mlcod 44'63 active+clean] exit Started/Primary/Active/Clean 43.234770 31 0.000080
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=36) [2] r=0 lpr=36 crt=44'64 lcod 44'63 mlcod 44'63 active mbc={}] exit Started/Primary/Active 43.235855 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=36) [2] r=0 lpr=36 crt=44'64 lcod 44'63 mlcod 44'63 active mbc={}] exit Started/Primary 43.701775 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=36) [2] r=0 lpr=36 crt=44'64 lcod 44'63 mlcod 44'63 active mbc={}] exit Started 43.701814 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=36) [2] r=0 lpr=36 crt=44'64 lcod 44'63 mlcod 44'63 active mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=12.765938759s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 44'63 active pruub 97.784805298s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.1(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.2(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.3(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.4(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:23.331524+0000)
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:40:52.806688+0000 osd.2 (osd.2) 40 : cluster [DBG] 5.1f deep-scrub starts
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:40:52.820349+0000 osd.2 (osd.2) 41 : cluster [DBG] 5.1f deep-scrub ok
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.5(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.6(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.7(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.8(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.9(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.a(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.b(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.c(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.d(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.e(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.f(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.10(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.11(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.12(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.13(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.14(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.15(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.16(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.17(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.18(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.19(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.1a(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.1b(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.1c(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.1d(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.1e(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.1f(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=12.765938759s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 97.784805298s@ mbc={}] exit Reset 0.001368 2 0.000145
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=12.765938759s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 97.784805298s@ mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=12.765938759s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 97.784805298s@ mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=12.765938759s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 97.784805298s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=12.765938759s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 97.784805298s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=12.765938759s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 97.784805298s@ mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=12.765938759s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 97.784805298s@ mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=12.765938759s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 97.784805298s@ mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=12.765938759s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 97.784805298s@ mbc={}] exit Started/Primary/Peering/GetInfo 0.000009 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=12.765938759s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 97.784805298s@ mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=12.765938759s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 97.784805298s@ mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=12.765938759s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 97.784805298s@ mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=12.765938759s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 97.784805298s@ mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=12.765938759s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 97.784805298s@ mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.001399 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.001245 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000117 2 0.000097
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000010 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.001563 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000191 2 0.000052
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.001940 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000134 2 0.000039
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000140 2 0.000022
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002139 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002118 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000047 2 0.000038
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.001750 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000010 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000105 2 0.000034
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002220 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000237 2 0.000211
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002392 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000168 2 0.000050
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002969 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000234 2 0.000043
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002947 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000123 2 0.000140
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002789 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000161 2 0.000155
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003461 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000122 2 0.000114
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003018 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000118 2 0.000113
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003963 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000142 2 0.000116
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003461 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000177 2 0.000038
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000010 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.004290 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000174 2 0.000161
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003536 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000139 2 0.000142
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.004425 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000123 2 0.000033
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.004645 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000124 2 0.000120
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003886 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000068 2 0.000113
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.004294 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000181 2 0.000065
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000017 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.004984 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000058 2 0.000124
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.004719 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000156 2 0.000137
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.004639 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000156 2 0.000156
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.004770 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000040 2 0.000143
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.004776 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000119 2 0.000036
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.004794 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000097 2 0.000045
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.005886 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.005907 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000062 2 0.000169
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000102 2 0.000039
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.006330 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000212 2 0.000192
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 47 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000053 2 0.000128
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000090 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000018 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:01 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:01 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 57425920 unmapped: 2203648 heap: 59629568 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:01 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 384605 data_alloc: 218103808 data_used: 40960
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 41) v1
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:40:52.806688+0000 osd.2 (osd.2) 40 : cluster [DBG] 5.1f deep-scrub starts
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:40:52.820349+0000 osd.2 (osd.2) 41 : cluster [DBG] 5.1f deep-scrub ok
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 48 handle_osd_map epochs [48,49], i have 48, src has [1,49]
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.327512 4 0.000056
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.327562 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.330467 4 0.000085
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.330545 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.327841 4 0.000081
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.327886 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.330000 4 0.000055
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.331106 4 0.000053
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.331170 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.330179 4 0.000063
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.329311 4 0.000058
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.329352 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.330266 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.329130 4 0.000072
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.329233 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.330624 4 0.000078
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.330710 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.330469 4 0.000072
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.330520 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.330067 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.328327 4 0.000055
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.328358 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.327796 4 0.000083
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.327839 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.328842 4 0.000063
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.328887 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.329306 4 0.000059
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.329338 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.328161 4 0.000081
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.328195 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.329277 4 0.000052
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.329308 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.331628 4 0.000105
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.327777 4 0.000052
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.331728 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.327818 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.327955 4 0.000044
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.327984 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.329194 4 0.000077
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.329223 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.328757 4 0.000054
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.328787 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.331987 4 0.000090
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.332040 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.330305 4 0.000054
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.330337 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.331524 4 0.000086
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.331562 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.331455 4 0.000067
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.331490 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=12.765938759s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 97.784805298s@ mbc={}] exit Started/Primary/Peering/WaitUpThru 0.333029 3 0.000096
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.327621 4 0.000288
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.327746 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.330135 4 0.000074
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.330175 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.331920 4 0.000056
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=12.765938759s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 97.784805298s@ mbc={}] exit Started/Primary/Peering 0.333104 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=12.765938759s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 97.784805298s@ mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.329154 4 0.000071
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.329188 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.331968 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.330707 4 0.000053
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.330739 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.330484 4 0.000048
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.330514 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001345 3 0.000212
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002410 3 0.000119
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002314 3 0.000086
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002173 3 0.000110
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002166 3 0.000163
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002112 3 0.000107
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002096 3 0.000028
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002083 3 0.000034
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002286 3 0.000257
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002181 3 0.000060
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002692 3 0.000643
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002789 3 0.000068
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002650 3 0.000027
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002699 3 0.000100
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002673 3 0.000037
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002681 3 0.000029
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002683 3 0.000027
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002687 3 0.000045
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002836 3 0.000215
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002661 3 0.000022
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002706 3 0.000081
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002565 3 0.000150
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002643 3 0.000046
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002628 3 0.000041
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=36/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002579 3 0.000163
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=36/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=36/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=36/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002550 3 0.000060
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002543 3 0.000030
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003198 3 0.000819
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003356 3 0.000065
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002921 3 0.000031
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002845 3 0.001018
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003020 3 0.000173
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:24.331661+0000)
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 49 heartbeat osd_stat(store_statfs(0x4fe14b000/0x0/0x4ffc00000, data 0x3bd75/0x82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 11:59:01 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 57704448 unmapped: 1925120 heap: 59629568 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:25.331814+0000)
Nov 26 11:59:01 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 57704448 unmapped: 1925120 heap: 59629568 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 26 11:59:01 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.937380791s of 10.020479202s, submitted: 199
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:26.331949+0000)
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:40:55.818410+0000 osd.2 (osd.2) 42 : cluster [DBG] 4.1b scrub starts
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:40:55.832519+0000 osd.2 (osd.2) 43 : cluster [DBG] 4.1b scrub ok
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 57696256 unmapped: 1933312 heap: 59629568 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 43) v1
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:40:55.818410+0000 osd.2 (osd.2) 42 : cluster [DBG] 4.1b scrub starts
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:40:55.832519+0000 osd.2 (osd.2) 43 : cluster [DBG] 4.1b scrub ok
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 49 heartbeat osd_stat(store_statfs(0x4fe14b000/0x0/0x4ffc00000, data 0x3bd75/0x82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:27.332108+0000)
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:40:56.806494+0000 osd.2 (osd.2) 44 : cluster [DBG] 4.1c scrub starts
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:40:56.820657+0000 osd.2 (osd.2) 45 : cluster [DBG] 4.1c scrub ok
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 57712640 unmapped: 1916928 heap: 59629568 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 45) v1
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:40:56.806494+0000 osd.2 (osd.2) 44 : cluster [DBG] 4.1c scrub starts
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:40:56.820657+0000 osd.2 (osd.2) 45 : cluster [DBG] 4.1c scrub ok
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 49 heartbeat osd_stat(store_statfs(0x4fe14c000/0x0/0x4ffc00000, data 0x3bd75/0x82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:28.332247+0000)
Nov 26 11:59:01 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:01 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:01 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 57794560 unmapped: 1835008 heap: 59629568 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:01 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 388113 data_alloc: 218103808 data_used: 49152
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.a deep-scrub starts
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.a deep-scrub ok
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:29.332354+0000)
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:40:58.819900+0000 osd.2 (osd.2) 46 : cluster [DBG] 4.a deep-scrub starts
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:40:58.834031+0000 osd.2 (osd.2) 47 : cluster [DBG] 4.a deep-scrub ok
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 57819136 unmapped: 1810432 heap: 59629568 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 47) v1
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:40:58.819900+0000 osd.2 (osd.2) 46 : cluster [DBG] 4.a deep-scrub starts
Nov 26 11:59:01 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:40:58.834031+0000 osd.2 (osd.2) 47 : cluster [DBG] 4.a deep-scrub ok
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 49 handle_osd_map epochs [50,50], i have 49, src has [1,50]
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 49 handle_osd_map epochs [50,50], i have 50, src has [1,50]
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 49 handle_osd_map epochs [50,50], i have 50, src has [1,50]
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 49 handle_osd_map epochs [50,50], i have 50, src has [1,50]
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1a(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000045 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000035
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000103 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000162 1 0.000201
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.15(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000051 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000014
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000066 1 0.000027
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.15(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000008
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000062 1 0.000021
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.3(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000089 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000037
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000141 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000353 1 0.000235
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.c(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000025 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000110 1 0.000030
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.2(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000077 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000033
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000088 1 0.000027
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.d(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000031 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000057 1 0.000022
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000019 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000041 1 0.000023
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.8(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000016 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000046 1 0.000021
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.2(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000017 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000053 1 0.000036
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.d(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000026 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000036 1 0.000018
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.9(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000010 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000025 1 0.000017
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.5(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000010 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000028 1 0.000022
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.8(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000010 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000027 1 0.000017
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.4(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000010 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000005
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000035 1 0.000016
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.a(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000010 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000024 1 0.000017
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.15(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000015 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000038 1 0.000016
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1b(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000016 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000037 1 0.000026
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.11(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000015 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000010
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000121 1 0.000020
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1c(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000018 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000011
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000044 1 0.000022
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1e(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000016 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000018
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000036 1 0.000026
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.11(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000016 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000010
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000033 1 0.000020
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.12(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000015 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000010
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000031 1 0.000025
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1c(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000016 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000010
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000032 1 0.000020
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.12(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000015 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000010
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000030 1 0.000020
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.11(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000015 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000009
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000058 1 0.000042
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.e(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000021 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000011
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000036 1 0.000020
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.2(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000010 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000043 1 0.000017
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.b(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000020 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000013
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000025 1 0.000018
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.18(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000010 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000024 1 0.000016
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1b(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000017 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000033 1 0.000016
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1a(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000010 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000036 1 0.000017
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1f(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000020 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000014
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000028 1 0.000023
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1c(unlocked)] enter Initial
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000015 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000033 1 0.000016
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.034170 1 0.000014
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.036624 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.364196 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.364214 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1e] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965672493s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.354835510s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1e] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965634346s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354835510s@ mbc={}] exit Reset 0.000048 1 0.000060
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965634346s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354835510s@ mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965634346s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354835510s@ mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965634346s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354835510s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965634346s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354835510s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965634346s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354835510s@ mbc={}] enter Started/Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.034291 1 0.000020
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.036641 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.364538 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.364551 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.19] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965537071s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.354843140s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.19] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965523720s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354843140s@ mbc={}] exit Reset 0.000024 1 0.000036
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965523720s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354843140s@ mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965523720s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354843140s@ mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965523720s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354843140s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965523720s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354843140s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965523720s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354843140s@ mbc={}] enter Started/Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active+clean] exit Started/Primary/Active/Clean 6.032835 1 0.000013
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary/Active 6.035715 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary 6.366711 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started 6.366724 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.d] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.967114449s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 101.356491089s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.d] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.967093468s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.356491089s@ mbc={}] exit Reset 0.000029 1 0.000040
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.967093468s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.356491089s@ mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.967093468s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.356491089s@ mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.967093468s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.356491089s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.967093468s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.356491089s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.967093468s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.356491089s@ mbc={}] enter Started/Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.033989 1 0.000020
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.036709 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.367891 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.367905 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.b] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965385437s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.354843140s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.b] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965375900s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354843140s@ mbc={}] exit Reset 0.000029 1 0.000030
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965375900s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354843140s@ mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965375900s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354843140s@ mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965375900s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354843140s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965375900s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354843140s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965375900s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354843140s@ mbc={}] enter Started/Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.034454 1 0.000014
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.036762 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.366135 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.366150 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.13] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965397835s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.354949951s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.13] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965386391s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354949951s@ mbc={}] exit Reset 0.000019 1 0.000030
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965386391s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354949951s@ mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965386391s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354949951s@ mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965386391s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354949951s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965386391s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354949951s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965386391s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354949951s@ mbc={}] enter Started/Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.034580 1 0.000015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.036733 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.367472 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.367491 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.12] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965286255s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.354904175s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.12] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965275764s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354904175s@ mbc={}] exit Reset 0.000018 1 0.000029
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965275764s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354904175s@ mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965275764s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354904175s@ mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965275764s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354904175s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965275764s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354904175s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965275764s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354904175s@ mbc={}] enter Started/Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.034583 1 0.000013
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.036784 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.367317 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.367333 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.11] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965224266s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.354919434s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.11] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965213776s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354919434s@ mbc={}] exit Reset 0.000018 1 0.000029
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965213776s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354919434s@ mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965213776s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354919434s@ mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965213776s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354919434s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965213776s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354919434s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965213776s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354919434s@ mbc={}] enter Started/Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.034692 1 0.000014
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.036807 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.365174 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.365190 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.10] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965167046s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.354927063s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.10] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965156555s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354927063s@ mbc={}] exit Reset 0.000017 1 0.000028
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965156555s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354927063s@ mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965156555s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354927063s@ mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965156555s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354927063s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965156555s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354927063s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965156555s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.354927063s@ mbc={}] enter Started/Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.033385 1 0.000023
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.036773 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.364978 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.364990 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1a] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965708733s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.355575562s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1a] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965698242s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355575562s@ mbc={}] exit Reset 0.000018 1 0.000029
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965698242s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355575562s@ mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965698242s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355575562s@ mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965698242s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355575562s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965698242s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355575562s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965698242s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355575562s@ mbc={}] enter Started/Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.033971 1 0.000011
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.036827 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.366150 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.366167 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.7] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965760231s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.355712891s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.7] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965748787s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355712891s@ mbc={}] exit Reset 0.000018 1 0.000029
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965748787s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355712891s@ mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965748787s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355712891s@ mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965748787s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355712891s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965748787s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355712891s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965748787s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355712891s@ mbc={}] enter Started/Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.034147 1 0.000013
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.036865 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.364692 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.364708 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.6] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965558052s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.355590820s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.6] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965546608s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355590820s@ mbc={}] exit Reset 0.000019 1 0.000029
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965546608s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355590820s@ mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965546608s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355590820s@ mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965546608s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355590820s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965546608s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355590820s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965546608s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355590820s@ mbc={}] enter Started/Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.034208 1 0.000010
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.036898 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.366129 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.366140 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.4] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965547562s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.355667114s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.4] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965536118s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355667114s@ mbc={}] exit Reset 0.000029 1 0.000039
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965536118s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355667114s@ mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965536118s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355667114s@ mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965536118s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355667114s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965536118s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355667114s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965536118s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355667114s@ mbc={}] enter Started/Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.034284 1 0.000013
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.036985 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.365780 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.365797 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.8] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965463638s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.355682373s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.8] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965453148s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355682373s@ mbc={}] exit Reset 0.000029 1 0.000030
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965453148s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355682373s@ mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965453148s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355682373s@ mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965453148s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355682373s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965453148s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355682373s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965453148s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355682373s@ mbc={}] enter Started/Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.034378 1 0.000011
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.037080 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.369129 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.369145 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.f] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965367317s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.355697632s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.f] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965354919s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355697632s@ mbc={}] exit Reset 0.000022 1 0.000035
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965354919s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355697632s@ mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965354919s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355697632s@ mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965354919s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355697632s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965354919s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355697632s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965354919s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355697632s@ mbc={}] enter Started/Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active+clean] exit Started/Primary/Active/Clean 6.034440 1 0.000012
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary/Active 6.037144 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary 6.367490 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started 6.367503 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.9] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965303421s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 101.355712891s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.9] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965286255s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.355712891s@ mbc={}] exit Reset 0.000025 1 0.000035
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965286255s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.355712891s@ mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965286255s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.355712891s@ mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965286255s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.355712891s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965286255s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.355712891s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active+clean] exit Started/Primary/Active/Clean 6.034475 1 0.000017
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary/Active 6.037166 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary 6.368666 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.034449 1 0.000011
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started 6.368688 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.037114 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.364889 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.365019 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965250969s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.355758667s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965239525s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355758667s@ mbc={}] exit Reset 0.000036 1 0.000036
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965239525s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355758667s@ mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965239525s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355758667s@ mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965239525s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355758667s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965239525s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355758667s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965239525s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355758667s@ mbc={}] enter Started/Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.034546 1 0.000013
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.037207 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.367391 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.367404 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.2] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965145111s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.355773926s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.2] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965133667s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355773926s@ mbc={}] exit Reset 0.000022 1 0.000040
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965133667s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355773926s@ mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965133667s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355773926s@ mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965133667s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355773926s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.e] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965133667s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355773926s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965133667s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355773926s@ mbc={}] enter Started/Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965089798s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 101.355735779s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.e] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965060234s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.355735779s@ mbc={}] exit Reset 0.000188 1 0.000205
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active+clean] exit Started/Primary/Active/Clean 6.034104 1 0.000017
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965060234s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.355735779s@ mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary/Active 6.037181 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965060234s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.355735779s@ mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary 6.369200 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965060234s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.355735779s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started 6.369216 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965060234s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.355735779s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965060234s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.355735779s@ mbc={}] enter Started/Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.14] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965779305s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 101.356475830s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.14] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965764999s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.356475830s@ mbc={}] exit Reset 0.000022 1 0.000033
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965764999s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.356475830s@ mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965764999s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.356475830s@ mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965764999s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.356475830s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965764999s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.356475830s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965764999s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.356475830s@ mbc={}] enter Started/Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active+clean] exit Started/Primary/Active/Clean 6.034693 1 0.000013
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary/Active 6.037290 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary 6.366489 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started 6.366503 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.034697 1 0.000011
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.037263 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.368010 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.15] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.368026 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.964997292s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 101.355789185s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.16] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.15] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.964989662s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.355796814s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.16] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.964977264s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.355789185s@ mbc={}] exit Reset 0.000034 1 0.000045
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.964977264s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.355789185s@ mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.964975357s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355796814s@ mbc={}] exit Reset 0.000025 1 0.000037
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.964977264s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.355789185s@ mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.964975357s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355796814s@ mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.964977264s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.355789185s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.964975357s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355796814s@ mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.964977264s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.355789185s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.964975357s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355796814s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.964977264s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.355789185s@ mbc={}] enter Started/Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.964975357s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355796814s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.964975357s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.355796814s@ mbc={}] enter Started/Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.034379 1 0.000013
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.037321 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.367842 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.367854 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965286255s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 101.355712891s@ mbc={}] enter Started/Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.17] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965548515s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 101.356452942s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.17] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965533257s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.356452942s@ mbc={}] exit Reset 0.000027 1 0.000039
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965533257s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.356452942s@ mbc={}] enter Started
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965533257s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.356452942s@ mbc={}] enter Start
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965533257s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.356452942s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965533257s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.356452942s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=9.965533257s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.356452942s@ mbc={}] enter Started/Stray
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007763 2 0.000173
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.15( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007401 2 0.000022
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.15( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.15( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.15( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007584 2 0.000023
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.006457 2 0.000108
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006279 2 0.000078
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006043 2 0.000029
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006065 2 0.000022
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005962 2 0.000018
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.8( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005865 2 0.000017
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.8( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.8( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.8( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005753 2 0.000018
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005661 2 0.000016
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.005575 2 0.000014
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005497 2 0.000015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005425 2 0.000014
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005393 2 0.000016
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005409 2 0.000014
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005335 2 0.000016
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005253 2 0.000016
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 50 handle_osd_map epochs [50,50], i have 50, src has [1,50]
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005359 2 0.000025
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000147 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005284 2 0.000018
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.11( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005510 2 0.000017
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.11( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.11( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.11( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006203 2 0.000017
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005883 2 0.000017
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005804 2 0.000015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.12( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005753 2 0.000016
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.12( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.12( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000701 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.12( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007065 2 0.000016
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007238 2 0.000022
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006919 2 0.000015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.18( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006838 2 0.000016
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.18( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.18( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.2( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007063 2 0.000017
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.2( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.2( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.2( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006816 2 0.000019
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006729 2 0.000017
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.18( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006715 2 0.000016
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[11.1f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006667 2 0.000018
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.d] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 50 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.d] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.7] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.7] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.4] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.4] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.8] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.8] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.e] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.e] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.15] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.15] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.16] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.16] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.9] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.9] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.17] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.17] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.19] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.19] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.b] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.b] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.13] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.13] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.12] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.12] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.11] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.11] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.10] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.10] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1a] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1a] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.6] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.6] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.f] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.f] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.2] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.2] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.14] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.14] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1e] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1e] failed. State was: unregistering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:01 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:30.332522+0000)
Nov 26 11:59:01 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 58679296 unmapped: 950272 heap: 59629568 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 50 handle_osd_map epochs [50,51], i have 50, src has [1,51]
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 50 handle_osd_map epochs [51,51], i have 51, src has [1,51]
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.991215 6 0.000503
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.985962 2 0.000042
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992691 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.986072 2 0.000024
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992836 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.986224 2 0.000067
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.993007 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.986702 2 0.000142
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.993841 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.986282 2 0.000050
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.993943 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.987269 2 0.000023
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.994238 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.987284 2 0.000087
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.994414 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.987428 2 0.000033
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.994567 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.988342 2 0.000749
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.994851 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.987394 2 0.000069
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.994795 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.989208 2 0.000717
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.995077 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.989267 2 0.000704
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.995199 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.989776 2 0.000327
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.995363 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 51 handle_osd_map epochs [51,51], i have 51, src has [1,51]
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.990176 2 0.000107
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.995526 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.989484 2 0.000913
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.990318 2 0.000211
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.995994 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.995782 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.990882 2 0.000014
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.996215 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.990942 2 0.000015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.996334 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991038 2 0.000015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.996493 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991179 2 0.000015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.996631 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991233 2 0.000055
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.996748 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991387 2 0.000014
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.996933 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991444 2 0.000016
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.997162 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991492 2 0.000011
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.997314 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991477 2 0.000077
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.997142 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991559 2 0.000015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.997526 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991606 2 0.000016
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.997672 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991662 2 0.000026
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.997869 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991923 2 0.000014
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.998083 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991962 2 0.000015
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.998375 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992064 2 0.000018
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992121 2 0.000013
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993207 2 0.000038
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993276 2 0.000027
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000927 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001383 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.000958 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001659 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.995313 7 0.000043
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.996627 7 0.000041
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.995244 7 0.000038
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.996541 7 0.000039
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007773 4 0.000057
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007685 4 0.000029
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007763 4 0.000084
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006888 4 0.000829
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006946 4 0.000032
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006911 4 0.000041
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006962 4 0.000048
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006972 4 0.000026
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006925 4 0.000102
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006875 4 0.000034
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007357 4 0.000371
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006880 4 0.000141
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006822 4 0.000026
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006892 4 0.000110
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006639 4 0.000029
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006698 4 0.000167
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006652 4 0.000073
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006577 4 0.000034
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006565 4 0.000024
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006640 4 0.000034
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006801 4 0.000301
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006525 4 0.000028
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006498 4 0.000042
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006513 4 0.000104
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006411 4 0.000087
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006399 4 0.000086
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006348 4 0.000098
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.006402 4 0.001168
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006351 4 0.000034
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000019 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005068 4 0.000222
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005068 4 0.000168
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000062 1 0.000038
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000003 0 0.000000
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.003641 7 0.000024
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.003271 7 0.000038
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.004640 7 0.000576
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.003011 7 0.000024
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.006293 7 0.000049
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.006049 7 0.000033
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.005379 7 0.000029
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009016 7 0.000023
Nov 26 11:59:01 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.008642 7 0.000024
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009539 7 0.000043
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.013176 5 0.002120
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009099 7 0.000023
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.013288 4 0.002243
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009319 7 0.000024
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.008404 7 0.000035
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009958 7 0.000026
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.013261 4 0.002127
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009763 7 0.000032
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009516 7 0.000024
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013596 7 0.000026
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 44'2 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.016985 2 0.000032
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 44'2 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 44'2 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 44'2 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.008157 1 0.000056
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000015 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.096457 3 0.000026
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.096473 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 44'2 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.073500 1 0.000085
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 44'2 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 44'2 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 44'2 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.087047 1 0.000024
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.087167 1 0.000078
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.4] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.087242 1 0.000016
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1e] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.087261 1 0.000051
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.16] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.084915 1 0.000022
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.7] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.084979 1 0.000016
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.8] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.085121 1 0.000013
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.17] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.095470 2 0.000015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.082826 1 0.000019
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.12] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.095493 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.082442 1 0.000060
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.b] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.082381 1 0.000028
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.6] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.082612 1 0.000209
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.f] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.082506 1 0.000228
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.2] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.082536 1 0.000015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.19] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.082619 1 0.000120
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1a] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.082820 1 0.000046
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.13] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.082808 1 0.000155
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.10] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.078980 1 0.000020
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.11] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.004200 1 0.000027
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.9] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000855 1 0.000142
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.e] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.007517 1 0.000037
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.094590 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.097893 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.014854 1 0.000095
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.102093 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.105791 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.4] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.022125 1 0.000027
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.109421 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.114105 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1e] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.029488 1 0.000032
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.116790 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.119873 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.16] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.036803 1 0.000039
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.121748 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.128068 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.7] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.044109 1 0.000137
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.129117 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.135184 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.8] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.051373 1 0.000036
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.136524 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.141924 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.17] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.058857 1 0.000020
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.141710 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.150750 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.12] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.066177 1 0.000019
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.148648 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.158251 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.b] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.073415 1 0.000157
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.155896 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.165017 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.6] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.080725 1 0.000066
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.163384 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.172107 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.f] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.088103 1 0.000032
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.170677 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.179297 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.2] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.095488 1 0.000017
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.178057 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.188051 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.19] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.102818 1 0.000021
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.185489 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.194849 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1a] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.109798 1 0.000037
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.192656 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.202340 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.10] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.117330 1 0.000033
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.200203 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.210015 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.13] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.124580 1 0.000019
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.203592 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.217227 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.11] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.146837 2 0.000081
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.151081 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started 1.238812 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.9] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.154342 2 0.000078
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.155264 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started 1.246137 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.e] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.477900 2 0.000020
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.477960 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000064 1 0.000136
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.d] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.481377 2 0.000020
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.481400 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000036 1 0.000044
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.15] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.008783 2 0.000191
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.008903 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started 1.483544 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.d] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.020176 2 0.000120
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.020270 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started 1.496957 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.15] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 51 heartbeat osd_stat(store_statfs(0x4fe142000/0x0/0x4ffc00000, data 0x3fed6/0x8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:31.332616+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 60080128 unmapped: 1646592 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 51 handle_osd_map epochs [51,52], i have 51, src has [1,52]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 52 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.033228 5 0.000027
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 52 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 1.033256 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 52 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 52 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 52 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000055 1 0.000070
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 52 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.14] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 52 pg[10.14( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.008854 2 0.000118
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 52 pg[10.14( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.008939 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 52 pg[10.14( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started 2.038776 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.14] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:32.332735+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:01.805222+0000 osd.2 (osd.2) 48 : cluster [DBG] 3.7 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:01.819363+0000 osd.2 (osd.2) 49 : cluster [DBG] 3.7 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 60096512 unmapped: 1630208 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 49) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:01.805222+0000 osd.2 (osd.2) 48 : cluster [DBG] 3.7 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:01.819363+0000 osd.2 (osd.2) 49 : cluster [DBG] 3.7 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 52 handle_osd_map epochs [52,53], i have 52, src has [1,53]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:33.332900+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 60170240 unmapped: 1556480 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 417515 data_alloc: 218103808 data_used: 57344
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 53 handle_osd_map epochs [54,54], i have 53, src has [1,54]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:34.333034+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 60194816 unmapped: 1531904 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:35.333148+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 60203008 unmapped: 1523712 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:36.333253+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 60203008 unmapped: 1523712 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.924298286s of 11.032805443s, submitted: 283
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 54 heartbeat osd_stat(store_statfs(0x4fe139000/0x0/0x4ffc00000, data 0x4504f/0x93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:37.333374+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:06.851289+0000 osd.2 (osd.2) 50 : cluster [DBG] 3.5 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:06.865396+0000 osd.2 (osd.2) 51 : cluster [DBG] 3.5 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 60211200 unmapped: 1515520 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 54 handle_osd_map epochs [54,55], i have 54, src has [1,55]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 51) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:06.851289+0000 osd.2 (osd.2) 50 : cluster [DBG] 3.5 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:06.865396+0000 osd.2 (osd.2) 51 : cluster [DBG] 3.5 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:38.333504+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 60235776 unmapped: 1490944 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 422718 data_alloc: 218103808 data_used: 69632
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 55 handle_osd_map epochs [55,56], i have 55, src has [1,56]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.1d deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.1d deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:39.333665+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:08.920811+0000 osd.2 (osd.2) 52 : cluster [DBG] 3.1d deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:08.935591+0000 osd.2 (osd.2) 53 : cluster [DBG] 3.1d deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 60260352 unmapped: 1466368 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 56 handle_osd_map epochs [56,57], i have 56, src has [1,57]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 53) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:08.920811+0000 osd.2 (osd.2) 52 : cluster [DBG] 3.1d deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:08.935591+0000 osd.2 (osd.2) 53 : cluster [DBG] 3.1d deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:40.333829+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 61358080 unmapped: 368640 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:41.333952+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 61358080 unmapped: 368640 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 57 handle_osd_map epochs [58,58], i have 57, src has [1,58]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:42.334087+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 61407232 unmapped: 319488 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 58 handle_osd_map epochs [58,59], i have 58, src has [1,59]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 59 heartbeat osd_stat(store_statfs(0x4fe12a000/0x0/0x4ffc00000, data 0x4dc05/0xa2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:43.334199+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 61415424 unmapped: 311296 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 436426 data_alloc: 218103808 data_used: 69632
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:44.334335+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:13.807443+0000 osd.2 (osd.2) 54 : cluster [DBG] 3.1e scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:13.821562+0000 osd.2 (osd.2) 55 : cluster [DBG] 3.1e scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 61415424 unmapped: 311296 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 55) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:13.807443+0000 osd.2 (osd.2) 54 : cluster [DBG] 3.1e scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:13.821562+0000 osd.2 (osd.2) 55 : cluster [DBG] 3.1e scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 59 handle_osd_map epochs [60,61], i have 59, src has [1,61]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:45.334463+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 61 heartbeat osd_stat(store_statfs(0x4fe12a000/0x0/0x4ffc00000, data 0x4dc05/0xa2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 61472768 unmapped: 253952 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 61 handle_osd_map epochs [61,62], i have 61, src has [1,62]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:46.334569+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 61497344 unmapped: 229376 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:47.334687+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 61505536 unmapped: 221184 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 62 handle_osd_map epochs [62,63], i have 62, src has [1,63]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.188990593s of 11.212288857s, submitted: 13
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe121000/0x0/0x4ffc00000, data 0x53068/0xab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:48.335788+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.7(unlocked)] enter Initial
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=0 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000044 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=0 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000023
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000083 1 0.000042
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000026 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000129 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1f(unlocked)] enter Initial
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=0 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=0 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000021
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000130 1 0.000047
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000073 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000238 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.17(unlocked)] enter Initial
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=0 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000022 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=0 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000064 1 0.000039
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000088 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.f(unlocked)] enter Initial
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=0 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000019 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=0 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000048 1 0.000026
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000077 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.16(unlocked)] enter Initial
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=0 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000028 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=0 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000017
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000067 1 0.000028
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000017 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000093 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.e(unlocked)] enter Initial
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=0 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000031 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=0 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000016
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000050 1 0.000029
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000074 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.6(unlocked)] enter Initial
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=0 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000031 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=0 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000014
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000062 1 0.000025
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000084 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1e(unlocked)] enter Initial
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=0 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000021 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=0 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000013
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000046 1 0.000026
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000068 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 63 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 61382656 unmapped: 344064 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 459782 data_alloc: 218103808 data_used: 73728
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 63 handle_osd_map epochs [63,64], i have 63, src has [1,64]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 63 handle_osd_map epochs [64,64], i have 64, src has [1,64]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.745512 2 0.000029
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.745620 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.745639 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000057 1 0.000074
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.745583 2 0.000034
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.746525 2 0.000055
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.746671 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.746690 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.746189 2 0.000120
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.745690 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.746443 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.746470 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000167 1 0.000185
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000044 1 0.000068
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.744756 2 0.000031
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.744839 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.744855 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000026 1 0.000036
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.746158 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.743925 2 0.000028
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.744370 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.744388 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000449 1 0.000950
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000031 1 0.000412
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.744786 2 0.000028
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.745211 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.745229 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000027 1 0.000372
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.746745 2 0.000032
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.746850 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.746862 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=0 lpr=63 pi=[45,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000024 1 0.000035
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 64 handle_osd_map epochs [64,64], i have 64, src has [1,64]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:49.335881+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 61390848 unmapped: 335872 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 64 handle_osd_map epochs [64,65], i have 64, src has [1,65]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.008468 6 0.000040
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.6( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=6 mbc={}] exit Started/Stray 1.007727 6 0.000072
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.6( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.6( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.16( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.007711 6 0.000023
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.16( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.16( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.1e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.008173 6 0.000033
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.1e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.1e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.8(unlocked)] enter Initial
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=0 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000039 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=0 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000038
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000088 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 65 handle_osd_map epochs [65,65], i have 65, src has [1,65]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000109 1 0.000193
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000037 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000192 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.009528 6 0.000035
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.18(unlocked)] enter Initial
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=0 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000045 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=0 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000040 1 0.000056
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000054 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000062 1 0.000155
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000026 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000132 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=3 mbc={}] exit Started/Stray 1.012940 6 0.000047
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=3 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=3 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 65 handle_osd_map epochs [65,65], i have 65, src has [1,65]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.012623 6 0.000027
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.012582 6 0.000064
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[6.8(unlocked)] enter Initial
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=0 lpr=0 pi=[43,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000029 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=0 lpr=0 pi=[43,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=0 lpr=65 pi=[43,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000016
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=0 lpr=65 pi=[43,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=0 lpr=65 pi=[43,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=0 lpr=65 pi=[43,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=0 lpr=65 pi=[43,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=0 lpr=65 pi=[43,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=0 lpr=65 pi=[43,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=0 lpr=65 pi=[43,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=0 lpr=65 pi=[43,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000124 1 0.000145
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=0 lpr=65 pi=[43,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.e( v 44'389 lc 39'58 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.004689 3 0.000107
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.e( v 44'389 lc 39'58 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=0 lpr=65 pi=[43,65)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000353 2 0.000038
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=0 lpr=65 pi=[43,65)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.e( v 44'389 lc 39'58 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000025 1 0.000035
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=0 lpr=65 pi=[43,65)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.e( v 44'389 lc 39'58 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=0 lpr=65 pi=[43,65)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 lc 39'52 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.004661 3 0.000129
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 lc 39'52 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.039498 1 0.000018
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.16( v 44'389 lc 39'66 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.044037 3 0.000068
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.16( v 44'389 lc 39'66 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 lc 39'52 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.038000 1 0.000028
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 lc 39'52 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.049942 1 0.000072
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 lc 39'38 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=3 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.090442 3 0.000068
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 lc 39'38 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=3 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.16( v 44'389 lc 39'66 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.050129 1 0.000022
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.16( v 44'389 lc 39'66 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.028799 1 0.000033
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.1e( v 44'389 lc 39'220 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.122902 3 0.000139
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.1e( v 44'389 lc 39'220 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 lc 39'38 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=3 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.028968 1 0.000019
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 lc 39'38 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=3 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.021617 1 0.000061
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.1e( v 44'389 lc 39'220 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.021846 1 0.000066
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 lc 39'46 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.141127 3 0.000104
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 lc 39'46 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.1e( v 44'389 lc 39'220 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035586 1 0.000312
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 lc 39'46 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.035638 1 0.000039
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 lc 39'46 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.6( v 44'389 lc 39'60 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.180771 3 0.000191
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.6( v 44'389 lc 39'60 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035608 1 0.000044
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.6( v 44'389 lc 39'60 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.035663 1 0.000064
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.6( v 44'389 lc 39'60 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 lc 39'177 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.212737 3 0.000219
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 lc 39'177 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:50.335974+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.042824 1 0.000045
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 lc 39'177 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.042591 1 0.000041
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 lc 39'177 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035869 1 0.000044
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 974848 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 65 handle_osd_map epochs [65,66], i have 66, src has [1,66]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.998342 2 0.000114
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.955470 1 0.000030
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.999775 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.008283 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.998666 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.998931 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000240 1 0.000306
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.997868 2 0.000091
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.998056 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.998148 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=0 lpr=65 pi=[45,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 66 handle_osd_map epochs [66,66], i have 66, src has [1,66]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.002468 1 0.002736
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.002293 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.002204 1 0.002233
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000021 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000375 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.708097 1 0.000028
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.999833 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.012496 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000191 1 0.000645
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.822434 1 0.000044
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.004303 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.012513 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000231 1 0.001670
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.746222 1 0.000053
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.005605 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.013401 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=0 lpr=65 pi=[43,65)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001042 2 0.000178
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=0 lpr=65 pi=[43,65)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001563 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=0 lpr=65 pi=[43,65)/1 crt=37'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=65/66 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=0 lpr=65 pi=[43,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.789422 1 0.000056
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.001899 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.014565 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000028 1 0.000060
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.911858 1 0.000040
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.004552 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.014101 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000027 1 0.000039
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000015 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.861429 1 0.000093
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.002562 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.015525 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[53,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000024 1 0.000046
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.883279 1 0.000089
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.006358 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.014086 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[45,64)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000021 1 0.000032
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000072 2 0.000019
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002173 2 0.000065
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004648 2 0.002655
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.003971 1 0.004029
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 66 handle_osd_map epochs [66,66], i have 66, src has [1,66]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=9
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=10
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=10
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003264 2 0.000031
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.006308 2 0.000158
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004265 2 0.000344
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004091 2 0.000023
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000975 2 0.000025
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004433 2 0.000023
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004560 2 0.000343
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003882 2 0.000257
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000045 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=12
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=12
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000750 2 0.000014
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000117 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001074 2 0.000076
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=13
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=13
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001074 2 0.000020
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000975 2 0.000017
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=6
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=6
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001549 2 0.000020
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.001047 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=65/66 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=0 lpr=65 pi=[43,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=65/66 n=1 ec=43/21 lis/c=65/43 les/c/f=66/45/0 sis=65) [2] r=0 lpr=65 pi=[43,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007279 4 0.000082
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=65/66 n=1 ec=43/21 lis/c=65/43 les/c/f=66/45/0 sis=65) [2] r=0 lpr=65 pi=[43,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=65/66 n=1 ec=43/21 lis/c=65/43 les/c/f=66/45/0 sis=65) [2] r=0 lpr=65 pi=[43,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 66 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=65/66 n=1 ec=43/21 lis/c=65/43 les/c/f=66/45/0 sis=65) [2] r=0 lpr=65 pi=[43,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:51.336139+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:20.860385+0000 osd.2 (osd.2) 56 : cluster [DBG] 3.e scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:20.874510+0000 osd.2 (osd.2) 57 : cluster [DBG] 3.e scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1064960 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.11 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.11 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 66 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993032 2 0.001059
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.997798 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993122 2 0.000025
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994159 2 0.000040
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999629 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.994883 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993190 2 0.000409
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002070 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.18( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.002025 5 0.000374
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.18( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.18( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.8( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.000985 5 0.001431
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.8( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.8( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993670 2 0.000113
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001114 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992622 2 0.001182
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999065 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993763 2 0.000038
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999208 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993371 2 0.000025
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999039 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 67 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009444 4 0.000083
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009293 4 0.000364
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009438 4 0.000293
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009677 4 0.000097
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: not registered w/ OSD
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: not registered w/ OSD
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.18( v 44'389 lc 39'37 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.010050 4 0.000075
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.18( v 44'389 lc 39'37 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.18( v 44'389 lc 39'37 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000052 1 0.000024
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.18( v 44'389 lc 39'37 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/53 les/c/f=67/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009452 4 0.000065
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/53 les/c/f=67/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/53 les/c/f=67/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/53 les/c/f=67/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/53 les/c/f=67/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009381 4 0.000048
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/53 les/c/f=67/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/53 les/c/f=67/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009710 4 0.000086
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/53 les/c/f=67/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/53 les/c/f=67/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000019 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/53 les/c/f=67/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/53 les/c/f=67/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/53 les/c/f=67/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/53 les/c/f=67/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009736 4 0.000065
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/53 les/c/f=67/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/53 les/c/f=67/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/53 les/c/f=67/54/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 57) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:20.860385+0000 osd.2 (osd.2) 56 : cluster [DBG] 3.e scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:20.874510+0000 osd.2 (osd.2) 57 : cluster [DBG] 3.e scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.028873 1 0.000025
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.8( v 44'389 lc 39'64 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.038980 4 0.000081
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.8( v 44'389 lc 39'64 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.8( v 44'389 lc 39'64 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000036 1 0.000080
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.8( v 44'389 lc 39'64 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052914 1 0.000017
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:52.336475+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:21.880535+0000 osd.2 (osd.2) 58 : cluster [DBG] 3.11 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:21.894665+0000 osd.2 (osd.2) 59 : cluster [DBG] 3.11 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fe10f000/0x0/0x4ffc00000, data 0x5a584/0xbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 61939712 unmapped: 835584 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 59) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:21.880535+0000 osd.2 (osd.2) 58 : cluster [DBG] 3.11 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:21.894665+0000 osd.2 (osd.2) 59 : cluster [DBG] 3.11 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.977148 1 0.000024
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.016181 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.018263 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.924099 1 0.000060
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000045 1 0.000074
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.016188 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.018444 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[45,66)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000094 1 0.000262
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000044 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000033 1 0.000134
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000797 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000018 1 0.000826
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001126 3 0.000049
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=9
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000967 3 0.000023
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:53.336679+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 61956096 unmapped: 1867776 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561716 data_alloc: 218103808 data_used: 86016
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 68 handle_osd_map epochs [68,69], i have 68, src has [1,69]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010949 2 0.000040
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.012164 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010708 2 0.000029
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011733 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001663 3 0.000084
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001671 3 0.000110
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 69 handle_osd_map epochs [69,69], i have 69, src has [1,69]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:54.336769+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:23.963299+0000 osd.2 (osd.2) 60 : cluster [DBG] 3.8 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:23.977334+0000 osd.2 (osd.2) 61 : cluster [DBG] 3.8 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 61931520 unmapped: 1892352 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 61) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:23.963299+0000 osd.2 (osd.2) 60 : cluster [DBG] 3.8 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:23.977334+0000 osd.2 (osd.2) 61 : cluster [DBG] 3.8 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:55.336914+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:24.983363+0000 osd.2 (osd.2) 62 : cluster [DBG] 3.16 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:24.997486+0000 osd.2 (osd.2) 63 : cluster [DBG] 3.16 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 61939712 unmapped: 1884160 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 70 handle_osd_map epochs [70,71], i have 70, src has [1,71]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 63) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:24.983363+0000 osd.2 (osd.2) 62 : cluster [DBG] 3.16 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:24.997486+0000 osd.2 (osd.2) 63 : cluster [DBG] 3.16 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:56.337027+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:26.030372+0000 osd.2 (osd.2) 64 : cluster [DBG] 3.18 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:26.044387+0000 osd.2 (osd.2) 65 : cluster [DBG] 3.18 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 61980672 unmapped: 1843200 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 65) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:26.030372+0000 osd.2 (osd.2) 64 : cluster [DBG] 3.18 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:26.044387+0000 osd.2 (osd.2) 65 : cluster [DBG] 3.18 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:57.337159+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:27.007126+0000 osd.2 (osd.2) 66 : cluster [DBG] 4.1a scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:27.021167+0000 osd.2 (osd.2) 67 : cluster [DBG] 4.1a scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62046208 unmapped: 1777664 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 67) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:27.007126+0000 osd.2 (osd.2) 66 : cluster [DBG] 4.1a scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:27.021167+0000 osd.2 (osd.2) 67 : cluster [DBG] 4.1a scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 71 handle_osd_map epochs [71,72], i have 71, src has [1,72]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.916528702s of 10.073667526s, submitted: 178
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.c(unlocked)] enter Initial
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=0 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000141 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=0 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000071
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000296 1 0.000031
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000039 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000382 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.1c(unlocked)] enter Initial
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=0 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000121 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=0 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000031
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000090 1 0.000440
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000068 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000645 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 72 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 72 heartbeat osd_stat(store_statfs(0x4fe0ff000/0x0/0x4ffc00000, data 0x6328a/0xcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:58.337288+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62177280 unmapped: 1646592 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 580007 data_alloc: 218103808 data_used: 106496
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.1 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.1 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 72 handle_osd_map epochs [72,73], i have 72, src has [1,73]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 72 handle_osd_map epochs [72,73], i have 73, src has [1,73]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.999926 2 0.000509
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.000627 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.000911 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000061 1 0.000089
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.001395 2 0.000312
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.001817 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.001834 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=0 lpr=72 pi=[45,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000055 1 0.000085
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000007 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000014 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 73 handle_osd_map epochs [73,73], i have 73, src has [1,73]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:59.337359+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:29.010044+0000 osd.2 (osd.2) 68 : cluster [DBG] 4.1 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:29.023921+0000 osd.2 (osd.2) 69 : cluster [DBG] 4.1 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62185472 unmapped: 1638400 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 73 handle_osd_map epochs [73,74], i have 73, src has [1,74]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 74 pg[9.c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.012868 5 0.000056
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 74 pg[9.c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 74 pg[9.c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 69) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:29.010044+0000 osd.2 (osd.2) 68 : cluster [DBG] 4.1 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:29.023921+0000 osd.2 (osd.2) 69 : cluster [DBG] 4.1 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 74 pg[9.1c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.012696 5 0.000667
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 74 pg[9.1c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 74 pg[9.1c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: not registered w/ OSD
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 74 pg[9.c( v 44'389 lc 39'82 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.006904 4 0.000366
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 74 pg[9.c( v 44'389 lc 39'82 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: not registered w/ OSD
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 74 pg[9.c( v 44'389 lc 39'82 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000034 1 0.000036
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 74 pg[9.c( v 44'389 lc 39'82 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 74 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035545 1 0.000240
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 74 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 74 pg[9.1c( v 44'389 lc 39'125 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.042819 4 0.000080
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 74 pg[9.1c( v 44'389 lc 39'125 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 74 pg[9.1c( v 44'389 lc 39'125 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000059 1 0.000053
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 74 pg[9.1c( v 44'389 lc 39'125 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 74 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052500 1 0.000041
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 74 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:00.337476+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:30.013169+0000 osd.2 (osd.2) 70 : cluster [DBG] 4.e scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:30.027320+0000 osd.2 (osd.2) 71 : cluster [DBG] 4.e scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 1499136 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 71) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:30.013169+0000 osd.2 (osd.2) 70 : cluster [DBG] 4.e scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:30.027320+0000 osd.2 (osd.2) 71 : cluster [DBG] 4.e scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.913625 1 0.000036
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.009103 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.022439 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000066 1 0.000113
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000027 1 0.000031
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.966896 1 0.000043
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.009534 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.022442 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[45,73)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000029 1 0.000044
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000017 1 0.000024
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0 olog.dups.size()=10
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=10
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0 olog.dups.size()=15
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000737 3 0.000022
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001014 3 0.000035
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:01.337604+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62341120 unmapped: 1482752 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 75 heartbeat osd_stat(store_statfs(0x4fe0f4000/0x0/0x4ffc00000, data 0x68514/0xd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 75 handle_osd_map epochs [75,76], i have 75, src has [1,76]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002768 2 0.000094
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003851 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003139 2 0.000070
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003959 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/45 les/c/f=76/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001450 3 0.000172
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/45 les/c/f=76/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/45 les/c/f=76/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/45 les/c/f=76/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=6 ec=45/34 lis/c=75/45 les/c/f=76/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002065 3 0.000142
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=6 ec=45/34 lis/c=75/45 les/c/f=76/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=6 ec=45/34 lis/c=75/45 les/c/f=76/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=6 ec=45/34 lis/c=75/45 les/c/f=76/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 76 handle_osd_map epochs [76,76], i have 76, src has [1,76]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:02.337770+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62341120 unmapped: 1482752 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.11 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.11 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:03.337868+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:33.005766+0000 osd.2 (osd.2) 72 : cluster [DBG] 4.11 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:33.020052+0000 osd.2 (osd.2) 73 : cluster [DBG] 4.11 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62398464 unmapped: 1425408 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 613434 data_alloc: 218103808 data_used: 122880
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.13 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.13 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 73) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:33.005766+0000 osd.2 (osd.2) 72 : cluster [DBG] 4.11 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:33.020052+0000 osd.2 (osd.2) 73 : cluster [DBG] 4.11 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:04.338078+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:34.038900+0000 osd.2 (osd.2) 74 : cluster [DBG] 4.13 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:34.053048+0000 osd.2 (osd.2) 75 : cluster [DBG] 4.13 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62398464 unmapped: 1425408 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 75) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:34.038900+0000 osd.2 (osd.2) 74 : cluster [DBG] 4.13 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:34.053048+0000 osd.2 (osd.2) 75 : cluster [DBG] 4.13 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:05.338244+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62398464 unmapped: 1425408 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:06.338411+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62406656 unmapped: 1417216 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 76 heartbeat osd_stat(store_statfs(0x4fe0ef000/0x0/0x4ffc00000, data 0x6b9a4/0xdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:07.338518+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:36.965505+0000 osd.2 (osd.2) 76 : cluster [DBG] 4.18 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:36.979628+0000 osd.2 (osd.2) 77 : cluster [DBG] 4.18 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 1409024 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.988549232s of 10.043775558s, submitted: 52
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 77) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:36.965505+0000 osd.2 (osd.2) 76 : cluster [DBG] 4.18 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:36.979628+0000 osd.2 (osd.2) 77 : cluster [DBG] 4.18 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:08.338674+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62423040 unmapped: 1400832 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 625978 data_alloc: 218103808 data_used: 135168
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:09.338785+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 1376256 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 78 heartbeat osd_stat(store_statfs(0x4fe0e7000/0x0/0x4ffc00000, data 0x6f3d4/0xe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 78 handle_osd_map epochs [79,79], i have 79, src has [1,79]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:10.338891+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62496768 unmapped: 1327104 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:11.338989+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:41.036607+0000 osd.2 (osd.2) 78 : cluster [DBG] 10.3 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:41.054276+0000 osd.2 (osd.2) 79 : cluster [DBG] 10.3 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 79) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:41.036607+0000 osd.2 (osd.2) 78 : cluster [DBG] 10.3 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:41.054276+0000 osd.2 (osd.2) 79 : cluster [DBG] 10.3 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62513152 unmapped: 1310720 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:12.339135+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:42.048896+0000 osd.2 (osd.2) 80 : cluster [DBG] 10.5 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:42.063010+0000 osd.2 (osd.2) 81 : cluster [DBG] 10.5 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 81) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:42.048896+0000 osd.2 (osd.2) 80 : cluster [DBG] 10.5 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:42.063010+0000 osd.2 (osd.2) 81 : cluster [DBG] 10.5 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62529536 unmapped: 1294336 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:13.339308+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:43.044070+0000 osd.2 (osd.2) 82 : cluster [DBG] 10.a scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:43.058006+0000 osd.2 (osd.2) 83 : cluster [DBG] 10.a scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 83) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:43.044070+0000 osd.2 (osd.2) 82 : cluster [DBG] 10.a scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:43.058006+0000 osd.2 (osd.2) 83 : cluster [DBG] 10.a scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62545920 unmapped: 1277952 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 630842 data_alloc: 218103808 data_used: 135168
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: handle_auth_request added challenge on 0x55fea79bfc00
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 79 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x70e07/0xe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:14.339428+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62496768 unmapped: 1327104 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:15.339521+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:44.973879+0000 osd.2 (osd.2) 84 : cluster [DBG] 10.c scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:44.987998+0000 osd.2 (osd.2) 85 : cluster [DBG] 10.c scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 85) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:44.973879+0000 osd.2 (osd.2) 84 : cluster [DBG] 10.c scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:44.987998+0000 osd.2 (osd.2) 85 : cluster [DBG] 10.c scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62496768 unmapped: 1327104 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:16.339675+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 80 pg[6.f(unlocked)] enter Initial
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 80 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=0 lpr=0 pi=[57,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000354 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 80 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=0 lpr=0 pi=[57,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 80 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000248 1 0.000409
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 80 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 80 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 80 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 80 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000086 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 80 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 80 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 80 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 80 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000219 1 0.000184
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 80 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 80 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.000879 2 0.000081
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 80 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 80 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 80 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62545920 unmapped: 1277952 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:17.339804+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 81 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999195 2 0.000119
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 81 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 1.000387 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 81 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=37'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 81 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=80/81 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 81 handle_osd_map epochs [80,81], i have 81, src has [1,81]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 81 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=80/81 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 81 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=80/81 n=1 ec=43/21 lis/c=80/57 les/c/f=81/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.001506 4 0.000541
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 81 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=80/81 n=1 ec=43/21 lis/c=80/57 les/c/f=81/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 81 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=80/81 n=1 ec=43/21 lis/c=80/57 les/c/f=81/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000057 2 0.000082
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 81 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=80/81 n=1 ec=43/21 lis/c=80/57 les/c/f=81/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 81 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=80/81 n=1 ec=43/21 lis/c=80/57 les/c/f=81/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000007 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 81 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=80/81 n=1 ec=43/21 lis/c=80/57 les/c/f=81/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=80/81 n=1 ec=43/21 lis/c=80/57 les/c/f=81/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.126281 1 0.000045
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=80/81 n=1 ec=43/21 lis/c=80/57 les/c/f=81/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=80/81 n=1 ec=43/21 lis/c=80/57 les/c/f=81/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=80/81 n=1 ec=43/21 lis/c=80/57 les/c/f=81/59/0 sis=80) [2] r=0 lpr=80 pi=[57,80)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 1187840 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:18.339932+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62676992 unmapped: 1146880 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 646344 data_alloc: 218103808 data_used: 147456
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:19.340050+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 1138688 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.717695236s of 11.755471230s, submitted: 27
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 81 heartbeat osd_stat(store_statfs(0x4fe0dc000/0x0/0x4ffc00000, data 0x74900/0xf0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 81 handle_osd_map epochs [82,82], i have 82, src has [1,82]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:20.340237+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:49.936516+0000 osd.2 (osd.2) 86 : cluster [DBG] 10.18 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:49.950775+0000 osd.2 (osd.2) 87 : cluster [DBG] 10.18 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 87) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:49.936516+0000 osd.2 (osd.2) 86 : cluster [DBG] 10.18 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:49.950775+0000 osd.2 (osd.2) 87 : cluster [DBG] 10.18 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62693376 unmapped: 1130496 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:21.340443+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 82 handle_osd_map epochs [82,83], i have 82, src has [1,83]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 1114112 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 83 heartbeat osd_stat(store_statfs(0x4fe0d6000/0x0/0x4ffc00000, data 0x77ffa/0xf6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:22.340576+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:51.924917+0000 osd.2 (osd.2) 88 : cluster [DBG] 10.1b scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:51.938785+0000 osd.2 (osd.2) 89 : cluster [DBG] 10.1b scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 89) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:51.924917+0000 osd.2 (osd.2) 88 : cluster [DBG] 10.1b scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:51.938785+0000 osd.2 (osd.2) 89 : cluster [DBG] 10.1b scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62734336 unmapped: 1089536 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:23.340742+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 84 pg[9.13(unlocked)] enter Initial
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84) [2] r=0 lpr=0 pi=[53,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000046 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84) [2] r=0 lpr=0 pi=[53,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84) [2] r=0 lpr=84 pi=[53,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000025
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84) [2] r=0 lpr=84 pi=[53,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84) [2] r=0 lpr=84 pi=[53,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84) [2] r=0 lpr=84 pi=[53,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84) [2] r=0 lpr=84 pi=[53,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84) [2] r=0 lpr=84 pi=[53,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84) [2] r=0 lpr=84 pi=[53,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84) [2] r=0 lpr=84 pi=[53,84)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84) [2] r=0 lpr=84 pi=[53,84)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000109 1 0.000034
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84) [2] r=0 lpr=84 pi=[53,84)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84) [2] r=0 lpr=84 pi=[53,84)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000045 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84) [2] r=0 lpr=84 pi=[53,84)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000166 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84) [2] r=0 lpr=84 pi=[53,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62734336 unmapped: 1089536 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 658880 data_alloc: 218103808 data_used: 155648
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:24.340844+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:54.004577+0000 osd.2 (osd.2) 90 : cluster [DBG] 10.1c scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:54.018671+0000 osd.2 (osd.2) 91 : cluster [DBG] 10.1c scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 85 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84) [2] r=0 lpr=84 pi=[53,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.004768 2 0.000067
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 85 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84) [2] r=0 lpr=84 pi=[53,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.004967 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 85 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84) [2] r=0 lpr=84 pi=[53,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.005000 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 85 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84) [2] r=0 lpr=84 pi=[53,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 85 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] r=-1 lpr=85 pi=[53,85)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 85 handle_osd_map epochs [84,85], i have 85, src has [1,85]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 85 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] r=-1 lpr=85 pi=[53,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000352 1 0.000413
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 85 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] r=-1 lpr=85 pi=[53,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 85 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] r=-1 lpr=85 pi=[53,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 85 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] r=-1 lpr=85 pi=[53,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 85 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] r=-1 lpr=85 pi=[53,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000091 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 85 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] r=-1 lpr=85 pi=[53,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 91) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:54.004577+0000 osd.2 (osd.2) 90 : cluster [DBG] 10.1c scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:54.018671+0000 osd.2 (osd.2) 91 : cluster [DBG] 10.1c scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 85 handle_osd_map epochs [85,85], i have 85, src has [1,85]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 1073152 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:25.341014+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 86 pg[9.13( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] r=-1 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.008833 6 0.000182
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 86 pg[9.13( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] r=-1 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 86 pg[9.13( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] r=-1 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 86 pg[9.13( v 44'389 lc 39'112 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=85) [2]/[0] r=-1 lpr=85 pi=[53,85)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002410 3 0.000115
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 86 pg[9.13( v 44'389 lc 39'112 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=85) [2]/[0] r=-1 lpr=85 pi=[53,85)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 86 pg[9.13( v 44'389 lc 39'112 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=85) [2]/[0] r=-1 lpr=85 pi=[53,85)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000075 1 0.000032
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 86 pg[9.13( v 44'389 lc 39'112 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=85) [2]/[0] r=-1 lpr=85 pi=[53,85)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=85) [2]/[0] r=-1 lpr=85 pi=[53,85)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035573 1 0.000021
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=85) [2]/[0] r=-1 lpr=85 pi=[53,85)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 983040 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:26.341152+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=85) [2]/[0] r=-1 lpr=85 pi=[53,85)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.975453 1 0.000018
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=85) [2]/[0] r=-1 lpr=85 pi=[53,85)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.013578 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=85) [2]/[0] r=-1 lpr=85 pi=[53,85)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.022548 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=85) [2]/[0] r=-1 lpr=85 pi=[53,85)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000090 1 0.000126
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000024 1 0.000030
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000828 3 0.000033
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 974848 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:27.341293+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:57.021872+0000 osd.2 (osd.2) 92 : cluster [DBG] 10.1d scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:41:57.035997+0000 osd.2 (osd.2) 93 : cluster [DBG] 10.1d scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997275 2 0.000047
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.998178 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 88 handle_osd_map epochs [87,88], i have 88, src has [1,88]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001448 3 0.000147
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 93) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:57.021872+0000 osd.2 (osd.2) 92 : cluster [DBG] 10.1d scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:41:57.035997+0000 osd.2 (osd.2) 93 : cluster [DBG] 10.1d scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 2023424 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 88 handle_osd_map epochs [88,88], i have 88, src has [1,88]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 88 heartbeat osd_stat(store_statfs(0x4fe0c7000/0x0/0x4ffc00000, data 0x807b5/0x106000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:28.341453+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 2007040 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 680911 data_alloc: 218103808 data_used: 176128
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:29.341590+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 1966080 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:30.341746+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 1966080 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:31.341878+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1957888 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 89 heartbeat osd_stat(store_statfs(0x4fe0c3000/0x0/0x4ffc00000, data 0x8221a/0x109000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 89 handle_osd_map epochs [90,91], i have 89, src has [1,91]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 89 handle_osd_map epochs [90,91], i have 91, src has [1,91]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.355766296s of 12.393082619s, submitted: 33
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 90 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=66) [2] r=0 lpr=66 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 40.230391 72 0.000124
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 90 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=66) [2] r=0 lpr=66 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 40.239900 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 90 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=66) [2] r=0 lpr=66 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 41.237721 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 90 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=66) [2] r=0 lpr=66 crt=44'389 mlcod 0'0 active mbc={}] exit Started 41.237747 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 90 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=66) [2] r=0 lpr=66 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 90 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90 pruub=15.770151138s) [0] r=-1 lpr=90 pi=[66,90)/1 crt=44'389 mlcod 0'0 active pruub 169.793777466s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 91 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90 pruub=15.770002365s) [0] r=-1 lpr=90 pi=[66,90)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 169.793777466s@ mbc={}] exit Reset 0.000184 2 0.000251
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 91 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90 pruub=15.770002365s) [0] r=-1 lpr=90 pi=[66,90)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 169.793777466s@ mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 91 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90 pruub=15.770002365s) [0] r=-1 lpr=90 pi=[66,90)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 169.793777466s@ mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 91 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90 pruub=15.770002365s) [0] r=-1 lpr=90 pi=[66,90)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 169.793777466s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 91 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90 pruub=15.770002365s) [0] r=-1 lpr=90 pi=[66,90)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 169.793777466s@ mbc={}] exit Start 0.000042 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 91 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90 pruub=15.770002365s) [0] r=-1 lpr=90 pi=[66,90)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 169.793777466s@ mbc={}] enter Started/Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:32.341998+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 92 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90) [0] r=-1 lpr=90 pi=[66,90)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.168343 3 0.000119
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 92 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90) [0] r=-1 lpr=90 pi=[66,90)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.168434 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 92 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90) [0] r=-1 lpr=90 pi=[66,90)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 92 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 92 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000076 1 0.000101
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 92 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 92 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 92 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 92 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 92 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 92 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 92 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 92 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003837 2 0.000030
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 92 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 92 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 92 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000058 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 92 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 92 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 92 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 1802240 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:33.342086+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 92 heartbeat osd_stat(store_statfs(0x4fcf1a000/0x0/0x4ffc00000, data 0x873bf/0x112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 92 handle_osd_map epochs [92,93], i have 93, src has [1,93]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001150 3 0.000183
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.005166 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 1728512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695771 data_alloc: 218103808 data_used: 176128
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.313604 5 0.000742
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000077 1 0.000096
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000453 1 0.000053
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.028461 2 0.000058
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:34.342182+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:03.900863+0000 osd.2 (osd.2) 94 : cluster [DBG] 10.1f scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:03.918327+0000 osd.2 (osd.2) 95 : cluster [DBG] 10.1f scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 95) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:03.900863+0000 osd.2 (osd.2) 94 : cluster [DBG] 10.1f scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:03.918327+0000 osd.2 (osd.2) 95 : cluster [DBG] 10.1f scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 93 handle_osd_map epochs [94,94], i have 94, src has [1,94]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.666975 1 0.000179
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 1.009851 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 2.015031 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 2.015053 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=92) [0]/[2] async=[0] r=0 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94 pruub=15.303225517s) [0] async=[0] r=-1 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 44'389 active pruub 171.510635376s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94 pruub=15.303150177s) [0] r=-1 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 171.510635376s@ mbc={}] exit Reset 0.000109 1 0.000160
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94 pruub=15.303150177s) [0] r=-1 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 171.510635376s@ mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94 pruub=15.303150177s) [0] r=-1 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 171.510635376s@ mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94 pruub=15.303150177s) [0] r=-1 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 171.510635376s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94 pruub=15.303150177s) [0] r=-1 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 171.510635376s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94 pruub=15.303150177s) [0] r=-1 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 171.510635376s@ mbc={}] enter Started/Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 1703936 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:35.342310+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:04.852317+0000 osd.2 (osd.2) 96 : cluster [DBG] 7.1a scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:04.866450+0000 osd.2 (osd.2) 97 : cluster [DBG] 7.1a scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 97) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:04.852317+0000 osd.2 (osd.2) 96 : cluster [DBG] 7.1a scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:04.866450+0000 osd.2 (osd.2) 97 : cluster [DBG] 7.1a scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 1703936 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 94 heartbeat osd_stat(store_statfs(0x4fcf14000/0x0/0x4ffc00000, data 0x8a853/0x118000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=-1 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.215222 6 0.000313
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=-1 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=-1 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=-1 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000770 2 0.000042
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=-1 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: not registered w/ OSD
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] lb MIN local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=-1 lpr=94 DELETING pi=[66,94)/2 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.031261 2 0.000135
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] lb MIN local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=-1 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.032064 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] lb MIN local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=-1 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.247326 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: not registered w/ OSD
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:36.342435+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:05.803813+0000 osd.2 (osd.2) 98 : cluster [DBG] 11.15 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:05.817979+0000 osd.2 (osd.2) 99 : cluster [DBG] 11.15 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 99) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:05.803813+0000 osd.2 (osd.2) 98 : cluster [DBG] 11.15 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:05.817979+0000 osd.2 (osd.2) 99 : cluster [DBG] 11.15 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 1703936 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:37.342565+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 1703936 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fcf0f000/0x0/0x4ffc00000, data 0x8dd95/0x11d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:38.342679+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:07.804005+0000 osd.2 (osd.2) 100 : cluster [DBG] 8.15 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:07.818109+0000 osd.2 (osd.2) 101 : cluster [DBG] 8.15 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 101) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:07.804005+0000 osd.2 (osd.2) 100 : cluster [DBG] 8.15 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:07.818109+0000 osd.2 (osd.2) 101 : cluster [DBG] 8.15 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 1695744 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 702716 data_alloc: 218103808 data_used: 184320
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fcf0f000/0x0/0x4ffc00000, data 0x8dd95/0x11d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:39.342817+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 1671168 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:40.342924+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 1671168 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:41.343026+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 98 pg[9.19(unlocked)] enter Initial
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97) [2] r=0 lpr=0 pi=[53,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000063 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97) [2] r=0 lpr=0 pi=[53,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97) [2] r=0 lpr=98 pi=[53,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000037
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97) [2] r=0 lpr=98 pi=[53,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97) [2] r=0 lpr=98 pi=[53,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97) [2] r=0 lpr=98 pi=[53,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97) [2] r=0 lpr=98 pi=[53,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000227 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97) [2] r=0 lpr=98 pi=[53,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97) [2] r=0 lpr=98 pi=[53,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97) [2] r=0 lpr=98 pi=[53,97)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97) [2] r=0 lpr=98 pi=[53,97)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000090 1 0.000320
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97) [2] r=0 lpr=98 pi=[53,97)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97) [2] r=0 lpr=98 pi=[53,97)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000033 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97) [2] r=0 lpr=98 pi=[53,97)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000176 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97) [2] r=0 lpr=98 pi=[53,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
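The enter/exit pairs above trace pg 9.19 walking through the peering state machine (Initial, Reset, Started/Primary/Peering/GetInfo, GetLog, and now WaitActingChange while it waits for the acting set to change). Each exit line carries three trailing numbers, which read as time spent in the state, events handled there, and time spent handling them. A small Python sketch that summarises such exit lines, for example to spot the states a PG lingered in; the field interpretation is inferred from the lines themselves:

# Summarise "exit <State> <secs> <events> <event-secs>" peering-trace lines
# read from stdin: total wall-clock time per state, longest first.
import re
import sys
from collections import defaultdict

EXIT_RE = re.compile(r"\bexit (\S+) (\d+\.\d+) (\d+) (\d+\.\d+)")

def summarise(lines):
    time_in_state = defaultdict(float)
    for line in lines:
        m = EXIT_RE.search(line)
        if m:
            state, secs, _events, _event_secs = m.groups()
            time_in_state[state] += float(secs)
    for state, secs in sorted(time_in_state.items(), key=lambda kv: -kv[1]):
        print(f"{secs:10.6f}s  {state}")

if __name__ == "__main__":
    summarise(sys.stdin)

Run over the pg[9.19 lines of this log it would surface, for instance, the 0.945 s this PG spends in Started/Primary/WaitActingChange a little further down.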
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 98 handle_osd_map epochs [97,98], i have 98, src has [1,98]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 1638400 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:42.343171+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.179075241s of 10.219714165s, submitted: 69
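The _kv_sync_thread utilization line above reads off directly: the BlueStore/RocksDB sync thread was idle for 10.179 s of a 10.220 s window, i.e. about 99.6% idle, while 69 transactions were submitted. The same arithmetic in Python, taking "submitted: 69" as the number of transactions committed in that window:

# Arithmetic on the _kv_sync_thread utilization figures above.
idle, window, submitted = 10.179075241, 10.219714165, 69
busy = window - idle
print(f"busy {busy:.3f}s ({100 * busy / window:.2f}% of the window), "
      f"~{submitted / busy:.0f} txns/s while busy")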
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 98 handle_osd_map epochs [99,99], i have 99, src has [1,99]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97) [2] r=0 lpr=98 pi=[53,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.945007 2 0.000104
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97) [2] r=0 lpr=98 pi=[53,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.945225 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97) [2] r=0 lpr=98 pi=[53,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.945492 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97) [2] r=0 lpr=98 pi=[53,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000074 1 0.000104
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
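The start_peering_interval line above is where pg 9.19 learns that its acting set changed from [2] to [0] while its up set stayed [2]: osd.2 remains "up" for the PG but is no longer acting, so its role flips from 0 to -1, the PG is marked remapped, and this OSD drops into Started/Stray. Role here is simply the OSD's index in the acting set, or -1 when it is absent, as a tiny sketch makes explicit (mirroring the convention visible in the log, not calling into Ceph):

# role = index of this OSD in the PG's acting set, -1 if it is not acting.
def role(osd_id: int, acting: list[int]) -> int:
    return acting.index(osd_id) if osd_id in acting else -1

# From the start_peering_interval line above: acting [2] -> [0] for osd.2.
print(role(2, [2]), "->", role(2, [0]))   # prints: 0 -> -1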
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 1622016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.3 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.3 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:43.343291+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:12.793233+0000 osd.2 (osd.2) 102 : cluster [DBG] 11.3 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:12.810914+0000 osd.2 (osd.2) 103 : cluster [DBG] 11.3 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 103) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:12.793233+0000 osd.2 (osd.2) 102 : cluster [DBG] 11.3 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:12.810914+0000 osd.2 (osd.2) 103 : cluster [DBG] 11.3 deep-scrub ok
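The log_client lines around the 11.3 deep-scrub messages show how the OSD ships its cluster-log entries to the monitor: entries are queued (last_log 103 with sent 101, hence 2 unsent), sent in one batch, and only marked "logged" once handle_log_ack confirms the monitor received them. A toy Python model of that queue/ack bookkeeping, mirroring only what these lines show (the real implementation is Ceph's LogClient):

# Toy model of the log_client counters: last_log / sent / unsent and the
# ack step that lets queued entries be reported as "logged".
from collections import deque

class LogQueue:
    def __init__(self):
        self.queue = deque()   # (seq, message) entries not yet acked
        self.last_log = 0      # sequence number of the newest queued entry
        self.sent = 0          # highest sequence already sent to the mon

    def queue_entry(self, msg):
        self.last_log += 1
        self.queue.append((self.last_log, msg))

    def send(self):
        unsent = [e for e in self.queue if e[0] > self.sent]
        self.sent = self.last_log
        return unsent          # what the "will send ..." lines report

    def handle_ack(self, last_acked):
        while self.queue and self.queue[0][0] <= last_acked:
            print("logged", *self.queue.popleft())

q = LogQueue()
q.queue_entry("cluster [DBG] 11.3 deep-scrub starts")
q.queue_entry("cluster [DBG] 11.3 deep-scrub ok")
print("will send", q.send())
q.handle_ack(2)   # the mon acks the latest sequence -> both entries logged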
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 1622016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 713462 data_alloc: 218103808 data_used: 184320
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _renew_subs
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 100 pg[9.19( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.233686 5 0.000034
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 100 pg[9.19( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 100 pg[9.19( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: not registered w/ OSD
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 100 pg[9.19( v 44'389 lc 39'69 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[53,99)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002602 4 0.000105
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 100 pg[9.19( v 44'389 lc 39'69 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[53,99)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 100 pg[9.19( v 44'389 lc 39'69 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[53,99)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000323 1 0.000031
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 100 pg[9.19( v 44'389 lc 39'69 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[53,99)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[53,99)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.049475 1 0.000250
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[53,99)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:44.343428+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[53,99)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.724037 1 0.000024
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[53,99)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.776615 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[53,99)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.010330 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[53,99)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000058 1 0.000087
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 101 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001262 2 0.000028
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Nov 26 11:59:02 compute-0 ceph-osd[90047]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000772 2 0.000104
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 1556480 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fcefe000/0x0/0x4ffc00000, data 0x96548/0x12d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
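The heartbeat osd_stat line above embeds store_statfs counters as hex byte counts; converting them makes the sizes readable (0x4ffc00000 works out to roughly 20 GiB, consistent with a small test OSD). The sketch only converts the values shown and does not assert which field is available versus total:

# Convert the hex store_statfs figures from the heartbeat line above.
for h in ("0x4fcefe000", "0x4ffc00000"):
    b = int(h, 16)
    print(f"{h} = {b:,} bytes = {b / 2**30:.2f} GiB")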
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:45.343531+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 101 handle_osd_map epochs [101,102], i have 101, src has [1,102]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 101 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004008 2 0.000069
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006147 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=101/102 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 102 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=75) [2] r=0 lpr=75 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 43.399544 80 0.000182
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 102 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=75) [2] r=0 lpr=75 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 43.401079 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 102 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=75) [2] r=0 lpr=75 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 44.404942 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 102 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=75) [2] r=0 lpr=75 crt=44'389 mlcod 0'0 active mbc={}] exit Started 44.404958 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 102 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=75) [2] r=0 lpr=75 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 102 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102 pruub=12.600892067s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=44'389 mlcod 0'0 active pruub 179.860031128s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 102 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102 pruub=12.600855827s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 179.860031128s@ mbc={}] exit Reset 0.000059 1 0.000082
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 102 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102 pruub=12.600855827s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 179.860031128s@ mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 102 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102 pruub=12.600855827s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 179.860031128s@ mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 102 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102 pruub=12.600855827s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 179.860031128s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 102 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102 pruub=12.600855827s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 179.860031128s@ mbc={}] exit Start 0.000215 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 102 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102 pruub=12.600855827s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 179.860031128s@ mbc={}] enter Started/Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=101/102 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=101/102 n=5 ec=45/34 lis/c=101/53 les/c/f=102/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002936 4 0.000077
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=101/102 n=5 ec=45/34 lis/c=101/53 les/c/f=102/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=101/102 n=5 ec=45/34 lis/c=101/53 les/c/f=102/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=101/102 n=5 ec=45/34 lis/c=101/53 les/c/f=102/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 1540096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 102 heartbeat osd_stat(store_statfs(0x4fcaed000/0x0/0x4ffc00000, data 0x980cd/0x130000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:46.343671+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 103 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=-1 lpr=102 pi=[75,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.960837 3 0.000247
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 103 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=-1 lpr=102 pi=[75,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.961080 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 103 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=-1 lpr=102 pi=[75,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 103 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 103 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000047 1 0.000082
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 103 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 103 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 103 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 103 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 103 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 103 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 103 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 103 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000025 1 0.000029
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 103 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 103 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000019 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 103 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 103 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 103 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 1523712 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:47.343813+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 103 handle_osd_map epochs [103,104], i have 103, src has [1,104]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999676 4 0.000048
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.999756 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=75/76 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 1507328 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.264553 5 0.000733
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000069 1 0.000056
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000381 1 0.000024
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.049499 2 0.000072
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:48.343910+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.731844 1 0.000084
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 1.046619 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 2.046392 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 2.046414 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105 pruub=15.217405319s) [0] async=[0] r=-1 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 44'389 active pruub 185.484176636s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105 pruub=15.217340469s) [0] r=-1 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.484176636s@ mbc={}] exit Reset 0.000096 1 0.000137
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105 pruub=15.217340469s) [0] r=-1 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.484176636s@ mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105 pruub=15.217340469s) [0] r=-1 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.484176636s@ mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105 pruub=15.217340469s) [0] r=-1 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.484176636s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105 pruub=15.217340469s) [0] r=-1 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.484176636s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105 pruub=15.217340469s) [0] r=-1 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.484176636s@ mbc={}] enter Started/Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 105 handle_osd_map epochs [105,105], i have 105, src has [1,105]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 417792 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 742213 data_alloc: 218103808 data_used: 188416
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:49.344002+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:18.827454+0000 osd.2 (osd.2) 104 : cluster [DBG] 7.c scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:18.841586+0000 osd.2 (osd.2) 105 : cluster [DBG] 7.c scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 105) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:18.827454+0000 osd.2 (osd.2) 104 : cluster [DBG] 7.c scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:18.841586+0000 osd.2 (osd.2) 105 : cluster [DBG] 7.c scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 409600 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _renew_subs
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=-1 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.220154 6 0.000460
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=-1 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=-1 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=-1 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000671 2 0.000032
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=-1 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: not registered w/ OSD
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] lb MIN local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=-1 lpr=105 DELETING pi=[75,105)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.053199 2 0.000109
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] lb MIN local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=-1 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.053914 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] lb MIN local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=-1 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.274118 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: not registered w/ OSD
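Above, pg 9.1c, now mapped away from this OSD, passes through Started/ToDelete and Started/ToDelete/Deleting before its state machine is torn down, i.e. the local copy is being removed. A small Python sketch for pulling every PG an OSD started deleting out of a log dump like this one; the pgid pattern is taken from the pg[...] prefix of these lines:

# List the PGs this OSD began deleting locally, based on the
# "enter Started/ToDelete/Deleting" transitions seen in the log on stdin.
import re
import sys

DELETING = re.compile(r"pg\[(\S+?)\(.*enter Started/ToDelete/Deleting")

seen = set()
for line in sys.stdin:
    m = DELETING.search(line)
    if m and m.group(1) not in seen:
        seen.add(m.group(1))
        print(m.group(1))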
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:50.344124+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 409600 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:51.344220+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:20.754172+0000 osd.2 (osd.2) 106 : cluster [DBG] 8.2 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:20.768355+0000 osd.2 (osd.2) 107 : cluster [DBG] 8.2 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 107) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:20.754172+0000 osd.2 (osd.2) 106 : cluster [DBG] 8.2 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:20.768355+0000 osd.2 (osd.2) 107 : cluster [DBG] 8.2 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 409600 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.d deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.d deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fcae3000/0x0/0x4ffc00000, data 0x9eadd/0x13b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 107 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=66) [2] r=0 lpr=66 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 59.931263 123 0.000234
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 107 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=66) [2] r=0 lpr=66 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 59.940995 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 107 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=66) [2] r=0 lpr=66 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 60.940635 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 107 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=66) [2] r=0 lpr=66 crt=44'389 mlcod 0'0 active mbc={}] exit Started 60.940679 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 107 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=66) [2] r=0 lpr=66 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 107 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107 pruub=12.069294930s) [0] r=-1 lpr=107 pi=[66,107)/1 crt=44'389 mlcod 0'0 active pruub 185.794097900s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 107 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107 pruub=12.069259644s) [0] r=-1 lpr=107 pi=[66,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.794097900s@ mbc={}] exit Reset 0.000066 1 0.000112
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 107 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107 pruub=12.069259644s) [0] r=-1 lpr=107 pi=[66,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.794097900s@ mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 107 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107 pruub=12.069259644s) [0] r=-1 lpr=107 pi=[66,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.794097900s@ mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 107 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107 pruub=12.069259644s) [0] r=-1 lpr=107 pi=[66,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.794097900s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 107 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107 pruub=12.069259644s) [0] r=-1 lpr=107 pi=[66,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.794097900s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 107 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107 pruub=12.069259644s) [0] r=-1 lpr=107 pi=[66,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.794097900s@ mbc={}] enter Started/Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:52.344346+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:21.745346+0000 osd.2 (osd.2) 108 : cluster [DBG] 11.d deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:21.759442+0000 osd.2 (osd.2) 109 : cluster [DBG] 11.d deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 109) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:21.745346+0000 osd.2 (osd.2) 108 : cluster [DBG] 11.d deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:21.759442+0000 osd.2 (osd.2) 109 : cluster [DBG] 11.d deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 401408 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.428810120s of 10.481870651s, submitted: 49
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 108 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107) [0] r=-1 lpr=107 pi=[66,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.000233 3 0.000057
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 108 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107) [0] r=-1 lpr=107 pi=[66,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.000264 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 108 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107) [0] r=-1 lpr=107 pi=[66,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 108 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 108 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000050 1 0.000073
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 108 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 108 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 108 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 108 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 108 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 108 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 108 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 108 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004486 2 0.000035
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 108 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 108 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000036 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 108 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 108 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 108 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:53.344465+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 401408 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 743468 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 108 heartbeat osd_stat(store_statfs(0x4fcadb000/0x0/0x4ffc00000, data 0xa20bf/0x141000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 108 handle_osd_map epochs [108,109], i have 108, src has [1,109]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 108 handle_osd_map epochs [109,109], i have 109, src has [1,109]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999914 3 0.000087
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.004492 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=66) [2] r=0 lpr=66 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 61.935878 129 0.000223
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=66) [2] r=0 lpr=66 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 61.945668 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=66) [2] r=0 lpr=66 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 62.946804 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=66) [2] r=0 lpr=66 crt=44'389 mlcod 0'0 active mbc={}] exit Started 62.946834 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=66) [2] r=0 lpr=66 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109 pruub=10.064750671s) [1] r=-1 lpr=109 pi=[66,109)/1 crt=44'389 mlcod 0'0 active pruub 185.794952393s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109 pruub=10.064422607s) [1] r=-1 lpr=109 pi=[66,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.794952393s@ mbc={}] exit Reset 0.000365 1 0.000443
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109 pruub=10.064422607s) [1] r=-1 lpr=109 pi=[66,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.794952393s@ mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109 pruub=10.064422607s) [1] r=-1 lpr=109 pi=[66,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.794952393s@ mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109 pruub=10.064422607s) [1] r=-1 lpr=109 pi=[66,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.794952393s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109 pruub=10.064422607s) [1] r=-1 lpr=109 pi=[66,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.794952393s@ mbc={}] exit Start 0.000094 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109 pruub=10.064422607s) [1] r=-1 lpr=109 pi=[66,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.794952393s@ mbc={}] enter Started/Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.002326 5 0.000163
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000044 1 0.000027
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000519 1 0.000058
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 109 handle_osd_map epochs [109,109], i have 109, src has [1,109]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035387 2 0.000058
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fcadb000/0x0/0x4ffc00000, data 0xa20bf/0x141000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:54.344564+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 376832 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.967674 1 0.000053
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109) [1] r=-1 lpr=109 pi=[66,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.005146 3 0.000181
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109) [1] r=-1 lpr=109 pi=[66,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.005291 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109) [1] r=-1 lpr=109 pi=[66,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000055 1 0.000082
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000173 2 0.000030
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] async=[1] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000021 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] async=[1] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] async=[1] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] async=[1] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 1.006747 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 110 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 2.011357 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 2.011406 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110 pruub=14.995377541s) [0] async=[0] r=-1 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 44'389 active pruub 191.732009888s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110 pruub=14.995049477s) [0] r=-1 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 191.732009888s@ mbc={}] exit Reset 0.000366 1 0.001207
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110 pruub=14.995049477s) [0] r=-1 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 191.732009888s@ mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110 pruub=14.995049477s) [0] r=-1 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 191.732009888s@ mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110 pruub=14.995049477s) [0] r=-1 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 191.732009888s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110 pruub=14.995049477s) [0] r=-1 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 191.732009888s@ mbc={}] exit Start 0.000160 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110 pruub=14.995049477s) [0] r=-1 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 191.732009888s@ mbc={}] enter Started/Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 110 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:55.344673+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 344064 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 110 handle_osd_map epochs [110,111], i have 110, src has [1,111]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 110 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] async=[1] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.013132 3 0.000051
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] async=[1] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.013367 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] async=[1] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] async=[1] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=-1 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.020433 7 0.000331
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=-1 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=-1 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=-1 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000062 1 0.000042
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=-1 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: not registered w/ OSD
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] lb MIN local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=-1 lpr=110 DELETING pi=[66,110)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.038163 2 0.000157
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] lb MIN local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=-1 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.038260 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] lb MIN local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=-1 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.058916 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: not registered w/ OSD
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:56.344772+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:25.774414+0000 osd.2 (osd.2) 110 : cluster [DBG] 7.1 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:25.788534+0000 osd.2 (osd.2) 111 : cluster [DBG] 7.1 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 111 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] async=[1] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [1]/[2] async=[1] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.550421 5 0.000209
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [1]/[2] async=[1] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [1]/[2] async=[1] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000219 1 0.000134
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [1]/[2] async=[1] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [1]/[2] async=[1] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000537 1 0.000047
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [1]/[2] async=[1] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [1]/[2] async=[1] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035396 2 0.000058
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [1]/[2] async=[1] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 1294336 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 111) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:25.774414+0000 osd.2 (osd.2) 110 : cluster [DBG] 7.1 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:25.788534+0000 osd.2 (osd.2) 111 : cluster [DBG] 7.1 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:57.344935+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 111 handle_osd_map epochs [112,112], i have 112, src has [1,112]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [1]/[2] async=[1] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.744969 1 0.000063
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [1]/[2] async=[1] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 1.331731 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [1]/[2] async=[1] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 2.345113 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [1]/[2] async=[1] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 2.345132 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [1]/[2] async=[1] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112 pruub=15.218453407s) [1] async=[1] r=-1 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 44'389 active pruub 194.299545288s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112 pruub=15.218302727s) [1] r=-1 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 194.299545288s@ mbc={}] exit Reset 0.000193 1 0.000244
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112 pruub=15.218302727s) [1] r=-1 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 194.299545288s@ mbc={}] enter Started
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112 pruub=15.218302727s) [1] r=-1 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 194.299545288s@ mbc={}] enter Start
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112 pruub=15.218302727s) [1] r=-1 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 194.299545288s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112 pruub=15.218302727s) [1] r=-1 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 194.299545288s@ mbc={}] exit Start 0.000011 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112 pruub=15.218302727s) [1] r=-1 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 194.299545288s@ mbc={}] enter Started/Stray
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 1228800 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 112 heartbeat osd_stat(store_statfs(0x4fcad1000/0x0/0x4ffc00000, data 0xa7133/0x14a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:58.345041+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 112 handle_osd_map epochs [112,113], i have 112, src has [1,113]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=-1 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.008575 7 0.000105
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=-1 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=-1 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=-1 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000038 1 0.000038
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=-1 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] lb MIN local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=-1 lpr=112 DELETING pi=[66,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.039665 2 0.000129
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] lb MIN local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=-1 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.039738 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] lb MIN local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=-1 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.048356 0 0.000000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 1228800 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 741634 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:59.345150+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 1204224 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:00.345294+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 1187840 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:01.345412+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 1179648 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:02.345574+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:31.753888+0000 osd.2 (osd.2) 112 : cluster [DBG] 11.8 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:31.768005+0000 osd.2 (osd.2) 113 : cluster [DBG] 11.8 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 113) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:31.753888+0000 osd.2 (osd.2) 112 : cluster [DBG] 11.8 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:31.768005+0000 osd.2 (osd.2) 113 : cluster [DBG] 11.8 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 1179648 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:03.345679+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 1179648 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 741230 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.755689621s of 10.794658661s, submitted: 51
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:04.345821+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:33.825967+0000 osd.2 (osd.2) 114 : cluster [DBG] 7.2 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:33.840163+0000 osd.2 (osd.2) 115 : cluster [DBG] 7.2 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 115) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:33.825967+0000 osd.2 (osd.2) 114 : cluster [DBG] 7.2 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:33.840163+0000 osd.2 (osd.2) 115 : cluster [DBG] 7.2 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 1163264 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:05.346000+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 1163264 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:06.346136+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:35.869962+0000 osd.2 (osd.2) 116 : cluster [DBG] 8.d scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:35.884039+0000 osd.2 (osd.2) 117 : cluster [DBG] 8.d scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 117) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:35.869962+0000 osd.2 (osd.2) 116 : cluster [DBG] 8.d scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:35.884039+0000 osd.2 (osd.2) 117 : cluster [DBG] 8.d scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64765952 unmapped: 1155072 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:07.346270+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64765952 unmapped: 1155072 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:08.346407+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:37.868386+0000 osd.2 (osd.2) 118 : cluster [DBG] 11.9 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:37.886082+0000 osd.2 (osd.2) 119 : cluster [DBG] 11.9 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 119) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:37.868386+0000 osd.2 (osd.2) 118 : cluster [DBG] 11.9 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:37.886082+0000 osd.2 (osd.2) 119 : cluster [DBG] 11.9 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 1146880 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 744672 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:09.346558+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:38.831000+0000 osd.2 (osd.2) 120 : cluster [DBG] 7.5 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:38.845086+0000 osd.2 (osd.2) 121 : cluster [DBG] 7.5 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 121) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:38.831000+0000 osd.2 (osd.2) 120 : cluster [DBG] 7.5 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:38.845086+0000 osd.2 (osd.2) 121 : cluster [DBG] 7.5 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64790528 unmapped: 1130496 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:10.346747+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:39.862288+0000 osd.2 (osd.2) 122 : cluster [DBG] 7.8 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:39.876144+0000 osd.2 (osd.2) 123 : cluster [DBG] 7.8 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 123) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:39.862288+0000 osd.2 (osd.2) 122 : cluster [DBG] 7.8 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:39.876144+0000 osd.2 (osd.2) 123 : cluster [DBG] 7.8 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64790528 unmapped: 1130496 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:11.346937+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 1122304 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.4 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.4 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:12.347100+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:41.846796+0000 osd.2 (osd.2) 124 : cluster [DBG] 8.4 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:41.860926+0000 osd.2 (osd.2) 125 : cluster [DBG] 8.4 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 125) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:41.846796+0000 osd.2 (osd.2) 124 : cluster [DBG] 8.4 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:41.860926+0000 osd.2 (osd.2) 125 : cluster [DBG] 8.4 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 1114112 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:13.347224+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 1105920 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 748113 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:14.347326+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 1105920 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:15.347425+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 1105920 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:16.347533+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 1097728 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:17.347647+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 1097728 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:18.347749+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 1081344 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 748113 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:19.347852+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 1081344 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:20.347960+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 1073152 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.991127014s of 17.006439209s, submitted: 12
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:21.348090+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:50.832508+0000 osd.2 (osd.2) 126 : cluster [DBG] 7.a scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:50.846552+0000 osd.2 (osd.2) 127 : cluster [DBG] 7.a scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 127) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:50.832508+0000 osd.2 (osd.2) 126 : cluster [DBG] 7.a scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:50.846552+0000 osd.2 (osd.2) 127 : cluster [DBG] 7.a scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 1064960 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:22.348416+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 1064960 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:23.348526+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1056768 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 749260 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:24.348708+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1056768 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:25.348809+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1056768 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:26.348969+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1048576 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:27.349114+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:56.838910+0000 osd.2 (osd.2) 128 : cluster [DBG] 7.15 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:56.852509+0000 osd.2 (osd.2) 129 : cluster [DBG] 7.15 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 129) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:56.838910+0000 osd.2 (osd.2) 128 : cluster [DBG] 7.15 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:56.852509+0000 osd.2 (osd.2) 129 : cluster [DBG] 7.15 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1040384 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:28.349265+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:57.886061+0000 osd.2 (osd.2) 130 : cluster [DBG] 11.1b scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:42:57.900242+0000 osd.2 (osd.2) 131 : cluster [DBG] 11.1b scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 131) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:57.886061+0000 osd.2 (osd.2) 130 : cluster [DBG] 11.1b scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:42:57.900242+0000 osd.2 (osd.2) 131 : cluster [DBG] 11.1b scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64913408 unmapped: 1007616 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 751557 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:29.349410+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64913408 unmapped: 1007616 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:30.349550+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 999424 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:31.349679+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 999424 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:32.349803+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 999424 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.948253632s of 11.955660820s, submitted: 6
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:33.349907+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:02.788155+0000 osd.2 (osd.2) 132 : cluster [DBG] 7.11 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:02.802293+0000 osd.2 (osd.2) 133 : cluster [DBG] 7.11 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 133) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:02.788155+0000 osd.2 (osd.2) 132 : cluster [DBG] 7.11 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:02.802293+0000 osd.2 (osd.2) 133 : cluster [DBG] 7.11 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64937984 unmapped: 983040 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 752705 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:34.350039+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64937984 unmapped: 983040 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:35.350208+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64946176 unmapped: 974848 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:36.350327+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64946176 unmapped: 974848 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:37.350460+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:06.833500+0000 osd.2 (osd.2) 134 : cluster [DBG] 11.1e scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:06.847732+0000 osd.2 (osd.2) 135 : cluster [DBG] 11.1e scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 135) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:06.833500+0000 osd.2 (osd.2) 134 : cluster [DBG] 11.1e scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:06.847732+0000 osd.2 (osd.2) 135 : cluster [DBG] 11.1e scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64946176 unmapped: 974848 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:38.350602+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64954368 unmapped: 966656 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 753854 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:39.350740+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:08.828724+0000 osd.2 (osd.2) 136 : cluster [DBG] 11.11 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:08.842856+0000 osd.2 (osd.2) 137 : cluster [DBG] 11.11 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 137) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:08.828724+0000 osd.2 (osd.2) 136 : cluster [DBG] 11.11 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:08.842856+0000 osd.2 (osd.2) 137 : cluster [DBG] 11.11 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64954368 unmapped: 966656 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:40.350916+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:09.844250+0000 osd.2 (osd.2) 138 : cluster [DBG] 11.1c scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:09.858386+0000 osd.2 (osd.2) 139 : cluster [DBG] 11.1c scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 139) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:09.844250+0000 osd.2 (osd.2) 138 : cluster [DBG] 11.1c scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:09.858386+0000 osd.2 (osd.2) 139 : cluster [DBG] 11.1c scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64954368 unmapped: 966656 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:41.351060+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 958464 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:42.351167+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64970752 unmapped: 950272 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.003293037s of 10.012474060s, submitted: 8
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:43.351290+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:12.800603+0000 osd.2 (osd.2) 140 : cluster [DBG] 8.12 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:12.814730+0000 osd.2 (osd.2) 141 : cluster [DBG] 8.12 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 141) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:12.800603+0000 osd.2 (osd.2) 140 : cluster [DBG] 8.12 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:12.814730+0000 osd.2 (osd.2) 141 : cluster [DBG] 8.12 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64970752 unmapped: 950272 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 757300 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:44.351443+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64978944 unmapped: 942080 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:45.351589+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64978944 unmapped: 942080 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:46.351709+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:15.760443+0000 osd.2 (osd.2) 142 : cluster [DBG] 7.1c scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:15.774577+0000 osd.2 (osd.2) 143 : cluster [DBG] 7.1c scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 143) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:15.760443+0000 osd.2 (osd.2) 142 : cluster [DBG] 7.1c scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:15.774577+0000 osd.2 (osd.2) 143 : cluster [DBG] 7.1c scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64987136 unmapped: 933888 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:47.351901+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 64987136 unmapped: 933888 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:48.352019+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65011712 unmapped: 909312 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 758448 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:49.352166+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65011712 unmapped: 909312 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:50.352301+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:19.709567+0000 osd.2 (osd.2) 144 : cluster [DBG] 11.12 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:19.723668+0000 osd.2 (osd.2) 145 : cluster [DBG] 11.12 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 145) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:19.709567+0000 osd.2 (osd.2) 144 : cluster [DBG] 11.12 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:19.723668+0000 osd.2 (osd.2) 145 : cluster [DBG] 11.12 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65011712 unmapped: 909312 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:51.352435+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 901120 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:52.352553+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.e deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 7.e deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 901120 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:53.352690+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:22.748167+0000 osd.2 (osd.2) 146 : cluster [DBG] 7.e deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:22.762299+0000 osd.2 (osd.2) 147 : cluster [DBG] 7.e deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 147) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:22.748167+0000 osd.2 (osd.2) 146 : cluster [DBG] 7.e deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:22.762299+0000 osd.2 (osd.2) 147 : cluster [DBG] 7.e deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 876544 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 760744 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:54.352843+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 876544 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:55.352958+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 876544 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:56.353061+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.11 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.812911987s of 13.821316719s, submitted: 8
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.11 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65052672 unmapped: 868352 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:57.353163+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:26.621968+0000 osd.2 (osd.2) 148 : cluster [DBG] 8.11 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:26.636075+0000 osd.2 (osd.2) 149 : cluster [DBG] 8.11 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 149) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:26.621968+0000 osd.2 (osd.2) 148 : cluster [DBG] 8.11 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:26.636075+0000 osd.2 (osd.2) 149 : cluster [DBG] 8.11 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65052672 unmapped: 868352 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:58.353299+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65069056 unmapped: 851968 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 763040 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:59.353617+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:28.619999+0000 osd.2 (osd.2) 150 : cluster [DBG] 11.b scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:28.633925+0000 osd.2 (osd.2) 151 : cluster [DBG] 11.b scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 151) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:28.619999+0000 osd.2 (osd.2) 150 : cluster [DBG] 11.b scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:28.633925+0000 osd.2 (osd.2) 151 : cluster [DBG] 11.b scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65069056 unmapped: 851968 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:00.353888+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65077248 unmapped: 843776 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:01.354031+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65077248 unmapped: 843776 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:02.354530+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65085440 unmapped: 835584 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:03.354655+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65085440 unmapped: 835584 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 763040 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:04.354765+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65085440 unmapped: 835584 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:05.354892+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 827392 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:06.355011+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 827392 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:07.355142+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:36.604748+0000 osd.2 (osd.2) 152 : cluster [DBG] 11.18 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:36.618787+0000 osd.2 (osd.2) 153 : cluster [DBG] 11.18 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.992200851s of 10.998500824s, submitted: 6
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 153) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:36.604748+0000 osd.2 (osd.2) 152 : cluster [DBG] 11.18 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:36.618787+0000 osd.2 (osd.2) 153 : cluster [DBG] 11.18 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 827392 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:08.355276+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:37.620505+0000 osd.2 (osd.2) 154 : cluster [DBG] 11.2 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:37.634702+0000 osd.2 (osd.2) 155 : cluster [DBG] 11.2 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 155) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:37.620505+0000 osd.2 (osd.2) 154 : cluster [DBG] 11.2 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:37.634702+0000 osd.2 (osd.2) 155 : cluster [DBG] 11.2 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 794624 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766485 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:09.355407+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:38.651767+0000 osd.2 (osd.2) 156 : cluster [DBG] 8.1b scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:38.665910+0000 osd.2 (osd.2) 157 : cluster [DBG] 8.1b scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 157) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:38.651767+0000 osd.2 (osd.2) 156 : cluster [DBG] 8.1b scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:38.665910+0000 osd.2 (osd.2) 157 : cluster [DBG] 8.1b scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 794624 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:10.355557+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 786432 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:11.355718+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 794624 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:12.355879+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 786432 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:13.356025+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.1a deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.1a deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 786432 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 767634 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:14.356143+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:43.640560+0000 osd.2 (osd.2) 158 : cluster [DBG] 11.1a deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:43.654675+0000 osd.2 (osd.2) 159 : cluster [DBG] 11.1a deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 159) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:43.640560+0000 osd.2 (osd.2) 158 : cluster [DBG] 11.1a deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:43.654675+0000 osd.2 (osd.2) 159 : cluster [DBG] 11.1a deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 786432 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:15.356309+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:44.610497+0000 osd.2 (osd.2) 160 : cluster [DBG] 11.1f scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:44.624660+0000 osd.2 (osd.2) 161 : cluster [DBG] 11.1f scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 161) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:44.610497+0000 osd.2 (osd.2) 160 : cluster [DBG] 11.1f scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:44.624660+0000 osd.2 (osd.2) 161 : cluster [DBG] 11.1f scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 778240 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:16.356451+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 778240 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:17.356553+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 778240 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:18.356685+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 770048 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768783 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:19.356785+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.046729088s of 12.055793762s, submitted: 8
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 770048 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:20.356892+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:49.676268+0000 osd.2 (osd.2) 162 : cluster [DBG] 8.1c scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:49.690417+0000 osd.2 (osd.2) 163 : cluster [DBG] 8.1c scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 163) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:49.676268+0000 osd.2 (osd.2) 162 : cluster [DBG] 8.1c scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:49.690417+0000 osd.2 (osd.2) 163 : cluster [DBG] 8.1c scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 753664 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:21.357018+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:50.696129+0000 osd.2 (osd.2) 164 : cluster [DBG] 6.8 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:50.710202+0000 osd.2 (osd.2) 165 : cluster [DBG] 6.8 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 165) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:50.696129+0000 osd.2 (osd.2) 164 : cluster [DBG] 6.8 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:50.710202+0000 osd.2 (osd.2) 165 : cluster [DBG] 6.8 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 753664 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:22.357242+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 753664 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:23.357405+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.e scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.e scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 745472 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 772225 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:24.357508+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:53.768327+0000 osd.2 (osd.2) 166 : cluster [DBG] 9.e scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:53.799993+0000 osd.2 (osd.2) 167 : cluster [DBG] 9.e scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 167) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:53.768327+0000 osd.2 (osd.2) 166 : cluster [DBG] 9.e scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:53.799993+0000 osd.2 (osd.2) 167 : cluster [DBG] 9.e scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 737280 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:25.357644+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 737280 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:26.357745+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 729088 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:27.357835+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 729088 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:28.357926+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 688128 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 773372 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:29.358017+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:58.714695+0000 osd.2 (osd.2) 168 : cluster [DBG] 9.6 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:58.750018+0000 osd.2 (osd.2) 169 : cluster [DBG] 9.6 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.023792267s of 10.033007622s, submitted: 8
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 169) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:58.714695+0000 osd.2 (osd.2) 168 : cluster [DBG] 9.6 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:58.750018+0000 osd.2 (osd.2) 169 : cluster [DBG] 9.6 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 679936 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:30.358181+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:59.709367+0000 osd.2 (osd.2) 170 : cluster [DBG] 9.7 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:43:59.741111+0000 osd.2 (osd.2) 171 : cluster [DBG] 9.7 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 171) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:59.709367+0000 osd.2 (osd.2) 170 : cluster [DBG] 9.7 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:43:59.741111+0000 osd.2 (osd.2) 171 : cluster [DBG] 9.7 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 679936 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:31.358327+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 671744 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:32.358470+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 671744 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:33.358571+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:44:02.684319+0000 osd.2 (osd.2) 172 : cluster [DBG] 9.f scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:44:02.722829+0000 osd.2 (osd.2) 173 : cluster [DBG] 9.f scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 173) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:44:02.684319+0000 osd.2 (osd.2) 172 : cluster [DBG] 9.f scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:44:02.722829+0000 osd.2 (osd.2) 173 : cluster [DBG] 9.f scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 655360 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 775666 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:34.358682+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 638976 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:35.358778+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 638976 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:36.358910+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:44:05.687160+0000 osd.2 (osd.2) 174 : cluster [DBG] 9.17 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:44:05.711807+0000 osd.2 (osd.2) 175 : cluster [DBG] 9.17 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 175) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:44:05.687160+0000 osd.2 (osd.2) 174 : cluster [DBG] 9.17 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:44:05.711807+0000 osd.2 (osd.2) 175 : cluster [DBG] 9.17 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 630784 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:37.359053+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 630784 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:38.359158+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 622592 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 776814 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:39.359253+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 606208 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:40.359352+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.8 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.970738411s of 10.977127075s, submitted: 6
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.8 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 606208 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:41.359456+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:44:10.686466+0000 osd.2 (osd.2) 176 : cluster [DBG] 9.8 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:44:10.725308+0000 osd.2 (osd.2) 177 : cluster [DBG] 9.8 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 177) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:44:10.686466+0000 osd.2 (osd.2) 176 : cluster [DBG] 9.8 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:44:10.725308+0000 osd.2 (osd.2) 177 : cluster [DBG] 9.8 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 606208 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:42.359705+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 598016 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:43.359808+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 598016 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 777961 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:44.359967+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65331200 unmapped: 589824 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:45.360129+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65331200 unmapped: 589824 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:46.360268+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65339392 unmapped: 581632 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:47.360432+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65339392 unmapped: 581632 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:48.360588+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:44:17.677838+0000 osd.2 (osd.2) 178 : cluster [DBG] 9.18 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:44:17.706084+0000 osd.2 (osd.2) 179 : cluster [DBG] 9.18 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 179) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:44:17.677838+0000 osd.2 (osd.2) 178 : cluster [DBG] 9.18 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:44:17.706084+0000 osd.2 (osd.2) 179 : cluster [DBG] 9.18 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 557056 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 779109 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:49.360781+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.c scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.c scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 548864 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:50.360942+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:44:19.656615+0000 osd.2 (osd.2) 180 : cluster [DBG] 9.c scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:44:19.688566+0000 osd.2 (osd.2) 181 : cluster [DBG] 9.c scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 181) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:44:19.656615+0000 osd.2 (osd.2) 180 : cluster [DBG] 9.c scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:44:19.688566+0000 osd.2 (osd.2) 181 : cluster [DBG] 9.c scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 548864 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:51.361170+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.976912498s of 10.983918190s, submitted: 6
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 532480 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:52.361364+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:44:21.670364+0000 osd.2 (osd.2) 182 : cluster [DBG] 6.f scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:44:21.695129+0000 osd.2 (osd.2) 183 : cluster [DBG] 6.f scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 183) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:44:21.670364+0000 osd.2 (osd.2) 182 : cluster [DBG] 6.f scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:44:21.695129+0000 osd.2 (osd.2) 183 : cluster [DBG] 6.f scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 532480 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:53.361505+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:44:22.697899+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.13 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:44:22.729589+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.13 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 185) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:44:22.697899+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.13 scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:44:22.729589+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.13 scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 532480 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 782551 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:54.361697+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65396736 unmapped: 524288 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:55.361857+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.19 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_channel(cluster) log [DBG] : 9.19 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65396736 unmapped: 524288 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:56.362028+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:44:25.661712+0000 osd.2 (osd.2) 186 : cluster [DBG] 9.19 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  will send 2025-11-26T11:44:25.700671+0000 osd.2 (osd.2) 187 : cluster [DBG] 9.19 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client handle_log_ack log(last 187) v1
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:44:25.661712+0000 osd.2 (osd.2) 186 : cluster [DBG] 9.19 deep-scrub starts
Nov 26 11:59:02 compute-0 ceph-osd[90047]: log_client  logged 2025-11-26T11:44:25.700671+0000 osd.2 (osd.2) 187 : cluster [DBG] 9.19 deep-scrub ok
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65404928 unmapped: 516096 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:57.362193+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65413120 unmapped: 507904 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:58.362321+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65413120 unmapped: 507904 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:59.362479+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65413120 unmapped: 507904 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:00.362620+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65421312 unmapped: 499712 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:01.362758+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65421312 unmapped: 499712 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:02.362903+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65429504 unmapped: 491520 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:03.362998+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65429504 unmapped: 491520 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:04.363105+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65437696 unmapped: 483328 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:05.363215+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65437696 unmapped: 483328 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:06.363315+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65437696 unmapped: 483328 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:07.363447+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65445888 unmapped: 475136 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:08.363553+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 458752 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:09.363674+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65470464 unmapped: 450560 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:10.363773+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65470464 unmapped: 450560 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:11.363915+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65470464 unmapped: 450560 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:12.364072+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65478656 unmapped: 442368 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:13.364188+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65478656 unmapped: 442368 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:14.364294+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 434176 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:15.364442+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 434176 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:16.364563+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65495040 unmapped: 425984 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:17.364694+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65495040 unmapped: 425984 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:18.364801+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65503232 unmapped: 417792 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:19.364931+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65511424 unmapped: 409600 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:20.365076+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65511424 unmapped: 409600 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:21.365213+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65511424 unmapped: 409600 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:22.365343+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65519616 unmapped: 401408 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:23.365452+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65527808 unmapped: 393216 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:24.365549+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65536000 unmapped: 385024 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:25.365655+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65536000 unmapped: 385024 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:26.365803+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65536000 unmapped: 385024 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:27.365906+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65536000 unmapped: 385024 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:28.366003+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65544192 unmapped: 376832 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:29.366115+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65544192 unmapped: 376832 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:30.366215+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65552384 unmapped: 368640 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:31.366336+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65552384 unmapped: 368640 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:32.366453+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65552384 unmapped: 368640 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:33.366554+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65560576 unmapped: 360448 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:34.366670+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65560576 unmapped: 360448 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:35.366785+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65560576 unmapped: 360448 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:36.366907+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65568768 unmapped: 352256 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:37.367030+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65568768 unmapped: 352256 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:38.367132+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65568768 unmapped: 352256 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:39.367240+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65576960 unmapped: 344064 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:40.367349+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65576960 unmapped: 344064 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:41.367480+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65585152 unmapped: 335872 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:42.367649+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65585152 unmapped: 335872 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:43.367798+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 327680 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:44.367945+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 327680 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:45.368073+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 327680 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:46.368189+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 319488 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:47.368323+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 319488 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:48.368423+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65609728 unmapped: 311296 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:49.368588+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65609728 unmapped: 311296 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:50.368731+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65609728 unmapped: 311296 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:51.368872+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 303104 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:52.369003+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65609728 unmapped: 311296 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:53.369145+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65609728 unmapped: 311296 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:54.369242+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 303104 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:55.369343+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 303104 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:56.369462+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65626112 unmapped: 294912 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:57.369566+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65626112 unmapped: 294912 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:58.369675+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65626112 unmapped: 294912 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:59.369769+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 286720 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:00.369901+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 286720 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:01.370029+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 286720 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:02.370872+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 278528 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:03.371012+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 278528 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:04.371133+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 270336 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:05.371256+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 270336 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:06.371400+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 270336 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:07.371497+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 262144 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:08.371619+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 262144 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:09.371734+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 253952 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:10.371843+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 253952 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:11.371949+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 245760 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:12.372103+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 237568 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:13.372221+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 237568 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:14.372345+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 229376 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:15.372445+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 229376 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:16.372544+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 229376 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:17.372675+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65699840 unmapped: 221184 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:18.372775+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65699840 unmapped: 221184 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:19.372877+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65708032 unmapped: 212992 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:20.372972+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65708032 unmapped: 212992 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:21.373127+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65708032 unmapped: 212992 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:22.373262+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65724416 unmapped: 196608 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:23.373364+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 188416 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:24.373461+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 188416 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:25.373576+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 180224 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:26.373693+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 180224 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:27.373825+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 172032 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:28.373918+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 172032 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:29.374011+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 172032 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:30.374123+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 163840 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:31.374215+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 163840 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:32.374335+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 155648 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:33.374448+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 147456 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:34.374550+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 155648 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:35.374670+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 147456 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:36.374776+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 147456 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:37.374851+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 139264 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:38.374950+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 139264 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:39.375045+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 139264 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:40.375154+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 131072 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:41.375260+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 131072 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:42.375402+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 131072 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:43.375517+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65798144 unmapped: 122880 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:44.375624+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65798144 unmapped: 122880 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:45.375748+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 114688 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:46.375851+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 114688 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:47.375971+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 114688 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:48.376098+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 106496 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:49.376233+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 106496 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:50.376336+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 106496 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:51.376440+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 98304 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:52.376553+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 98304 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:53.376684+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 98304 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:54.376792+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 98304 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:55.376934+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 90112 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:56.377064+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 90112 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:57.377177+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 73728 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:58.377301+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 73728 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:59.377401+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 65536 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:00.377496+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 65536 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:01.377632+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 65536 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:02.377776+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 57344 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:03.377876+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 57344 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:04.377979+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 49152 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:05.378120+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 49152 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:06.378229+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 40960 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:07.378337+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 40960 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:08.378434+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 32768 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:09.378539+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 24576 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:10.378669+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 24576 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:11.378766+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 16384 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:12.378845+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 16384 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:13.378946+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 16384 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:14.379042+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 16384 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:15.379144+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 16384 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:16.379261+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 8192 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:17.379363+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 8192 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:18.379498+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 8192 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:19.379589+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 0 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:20.379687+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 0 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:21.379801+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 0 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:22.379925+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 1032192 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:23.380058+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 1032192 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:24.380164+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 1024000 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:25.380268+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 1024000 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:26.380376+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 1024000 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:27.380473+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 1015808 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:28.380570+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 1015808 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:29.380680+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 1015808 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:30.380812+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 1007616 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:31.380926+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 1007616 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:32.381125+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 999424 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:33.381480+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 999424 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:34.381606+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 991232 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:35.381712+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 991232 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:36.381818+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 991232 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:37.381950+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 983040 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:38.382079+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 983040 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:39.382262+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 974848 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:40.382423+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 974848 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:41.382573+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 974848 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:42.382701+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 966656 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:43.382808+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 966656 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:44.382915+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 958464 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:45.383111+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 958464 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:46.383261+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 958464 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:47.383401+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 950272 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:48.383500+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 950272 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:49.383589+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 950272 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:50.383662+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 942080 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:51.383748+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 942080 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:52.383869+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 933888 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:53.383978+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 925696 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:54.384065+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 917504 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:55.384158+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 917504 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:56.384258+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 917504 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:57.384369+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 917504 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:58.384492+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 917504 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:59.384587+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 925696 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:00.384691+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 917504 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:01.384780+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 917504 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:02.384942+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 909312 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:03.385067+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 909312 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:04.385167+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 909312 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:05.385274+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 901120 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:06.385392+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 901120 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:07.385495+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 892928 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:08.385604+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 892928 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:09.385721+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 892928 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:10.385825+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 884736 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:11.385946+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 884736 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:12.386092+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 884736 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:13.386206+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 876544 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:14.386304+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 876544 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:15.386388+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 876544 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:16.386513+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 868352 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:17.386608+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 868352 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:18.386678+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 860160 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:19.386774+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 860160 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:20.386872+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 851968 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:21.386969+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 851968 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:22.387245+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 851968 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:23.387348+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 843776 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:24.387459+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 843776 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:25.388014+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 843776 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:26.388179+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 835584 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:27.388311+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 827392 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:28.388413+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 827392 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:29.388537+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 819200 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:30.388656+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 819200 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:31.388752+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 811008 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:32.388903+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 811008 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:33.388990+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 811008 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:34.389078+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 802816 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:35.389184+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 802816 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:36.389278+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 794624 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:37.389363+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:38.389456+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 794624 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:39.389548+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 786432 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:40.389699+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 778240 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:41.389809+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 778240 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:42.389928+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 778240 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:43.390027+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 770048 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:44.390127+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 778240 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:45.390228+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 770048 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:46.390324+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 770048 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:47.390421+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 761856 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:48.390524+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 761856 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:49.391006+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 761856 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:50.391101+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 753664 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:51.391192+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 753664 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:52.391298+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 753664 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:53.391385+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 745472 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:54.391478+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 745472 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:55.391582+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 745472 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:56.391675+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 737280 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:57.391764+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 737280 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:58.391856+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 729088 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:59.392001+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 729088 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:00.392140+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 720896 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:01.392307+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 720896 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:02.392472+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 720896 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:03.392575+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 712704 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:04.392709+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 712704 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:05.392809+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 704512 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:06.392945+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 704512 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:07.393049+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 704512 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:08.393198+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 704512 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:09.393300+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 696320 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:10.393404+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 696320 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:11.393501+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 679936 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:12.393621+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 679936 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:13.393706+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 671744 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:14.393803+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 671744 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:15.393898+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 671744 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:16.394001+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 663552 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:17.394095+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 663552 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:18.394209+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 655360 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:19.394317+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 647168 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:20.394439+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 647168 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:21.394571+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 638976 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:22.394676+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 638976 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:23.394788+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 638976 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:24.394891+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 622592 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:25.395004+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 622592 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:26.395114+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 614400 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:27.395210+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 614400 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:28.395322+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 614400 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:29.395430+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 614400 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:30.395540+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 614400 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:31.395648+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 614400 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:32.395771+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 606208 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:33.395928+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 606208 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:34.396022+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 589824 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:35.396112+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 589824 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:36.396249+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 589824 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:37.396339+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 581632 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:38.396475+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 581632 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:39.396570+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 573440 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:40.396674+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 573440 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:41.396767+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 573440 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:42.396887+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 565248 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:43.396995+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 565248 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:44.397089+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 557056 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:45.397189+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 548864 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:46.397295+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 548864 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:47.397404+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 540672 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:48.397510+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 540672 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:49.397625+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 532480 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:50.397738+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 532480 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:51.397864+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 524288 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:52.397977+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 524288 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:53.398071+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 524288 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:54.398171+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 516096 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:55.398278+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 516096 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:56.398372+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 516096 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:57.398486+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 507904 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5518 writes, 23K keys, 5518 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5518 writes, 814 syncs, 6.78 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5515 writes, 23K keys, 5515 commit groups, 1.0 writes per commit group, ingest: 18.24 MB, 0.03 MB/s
                                           Interval WAL: 5515 writes, 813 syncs, 6.78 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea4851090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea4851090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea4851090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fea48511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:58.398582+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 442368 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:59.398685+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 434176 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:00.398783+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 434176 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:01.398882+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 434176 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:02.399003+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 425984 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:03.399101+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 417792 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:04.399210+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 409600 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:05.400734+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 401408 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:06.400863+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 401408 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:07.400967+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 401408 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:08.401071+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 393216 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:09.401164+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 385024 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:10.401266+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 376832 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:11.401364+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 368640 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:12.401472+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 360448 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:13.401581+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 360448 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:14.401694+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 360448 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:15.401788+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 352256 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:16.401915+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 352256 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:17.402023+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 352256 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:18.402165+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 344064 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:19.402274+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 344064 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:20.402390+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 335872 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:21.402494+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 335872 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:22.402615+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 335872 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:23.402683+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 335872 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:24.402799+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 335872 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:25.402988+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 327680 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:26.403078+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 327680 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:27.403184+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 327680 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:28.403306+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 319488 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:29.403402+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 311296 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:30.403517+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 311296 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:31.403613+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 303104 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:32.403742+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 303104 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:33.403881+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 303104 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:34.404016+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 294912 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:35.404156+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 294912 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:36.404284+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 286720 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:37.404429+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 286720 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:38.404527+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 278528 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:39.404628+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 278528 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:40.404741+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 278528 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:41.404857+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 270336 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:42.404969+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 270336 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 351.555572510s of 351.562835693s, submitted: 6
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:43.405069+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 114688 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:44.405176+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 1007616 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:45.405276+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 1007616 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:46.405444+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 1007616 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:47.405585+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 1007616 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:48.405694+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 991232 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:49.405801+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 991232 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:50.405938+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 983040 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:51.406068+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 983040 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:52.406193+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 983040 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:53.406310+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 974848 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:54.406408+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 974848 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:55.406505+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 991232 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:56.406620+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 983040 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:57.406744+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 983040 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:58.406855+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 974848 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:59.406994+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 974848 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:00.407103+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 966656 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:01.407206+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 966656 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:02.407326+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 966656 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:03.407493+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 958464 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:04.407588+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 958464 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:05.407686+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 950272 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:06.407824+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 950272 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:07.407936+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 950272 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:08.408039+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 942080 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:09.408143+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 933888 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:10.408246+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 925696 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:11.408365+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 917504 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:12.408488+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 909312 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:13.408583+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 909312 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:14.408687+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 909312 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:15.408777+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 909312 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:16.408922+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 901120 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:17.409023+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 901120 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:18.409120+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 901120 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:19.409214+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 892928 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:20.409322+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 892928 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:21.409501+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 884736 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:22.409678+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 884736 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:23.409778+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 884736 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:24.409892+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 884736 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:25.410022+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 876544 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:26.411137+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 876544 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:27.411240+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 868352 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:28.411363+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 868352 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:29.411503+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 868352 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:30.411621+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 860160 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:31.411737+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 860160 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:32.411865+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 860160 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:33.411972+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 843776 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:34.412074+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 843776 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:35.412209+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 835584 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:36.416079+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 835584 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:37.416226+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 835584 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:38.416392+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 827392 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:39.416543+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 827392 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:40.416722+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 819200 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:41.416861+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 819200 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:42.417005+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 819200 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:43.417108+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 811008 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:44.417216+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 811008 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:45.417335+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 802816 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:46.417458+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 802816 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:47.417555+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 802816 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:48.417672+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 794624 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:49.417787+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 794624 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:50.417967+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 794624 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:51.418104+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 786432 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:52.418277+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 786432 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:53.418448+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 786432 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:54.418600+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 770048 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:55.418707+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 770048 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:56.418853+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 761856 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:57.418987+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 761856 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:58.419121+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 761856 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:59.419254+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 761856 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:00.419420+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 761856 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:01.419556+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 761856 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:02.419723+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 761856 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:03.419878+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 761856 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:04.419979+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 761856 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:05.420144+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 761856 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:06.420309+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 761856 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:07.420402+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 753664 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:08.420513+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 753664 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:09.420615+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 753664 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:10.420712+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 753664 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:11.420836+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 737280 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:12.420941+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 737280 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:13.421033+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 737280 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:14.421121+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 737280 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:15.421212+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 737280 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:16.421298+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 737280 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:17.421393+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 737280 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:18.421512+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 729088 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:19.421675+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 729088 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:20.421790+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 729088 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:21.421897+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 729088 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:22.422014+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 729088 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:23.422137+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 729088 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:24.422227+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 729088 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:25.422322+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 729088 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:26.422442+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 729088 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:27.422577+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 729088 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:28.422709+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 729088 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:29.422847+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 729088 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:30.422972+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 729088 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:31.423094+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 729088 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:32.423221+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 729088 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:33.423377+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 729088 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:34.423477+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 720896 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:35.423568+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 720896 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:36.423678+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 720896 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:37.423817+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 720896 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:38.423918+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 720896 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:39.424043+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 712704 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:40.424179+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 712704 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:41.424317+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 712704 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:42.424468+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 712704 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:43.424657+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 712704 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:44.424888+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 712704 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:45.425021+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 712704 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:46.425147+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 712704 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:47.425318+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 712704 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:48.425714+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 712704 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:49.425926+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 704512 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:50.426130+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 704512 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:51.426338+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 704512 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:52.426570+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 704512 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:53.426788+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 704512 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:54.427006+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 704512 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:55.427178+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 704512 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:56.427340+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 704512 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:57.427480+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 704512 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:58.427615+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 704512 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:59.427817+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 704512 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:00.427983+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 704512 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:01.428131+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 704512 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:02.428372+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 704512 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:03.428585+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 696320 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:04.428744+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 696320 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:05.428885+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 696320 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:06.428999+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 696320 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:07.429118+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 696320 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:08.429251+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 696320 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:09.429397+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 696320 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:10.429514+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 696320 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:11.429604+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 688128 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:12.429733+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 688128 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:13.429852+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 688128 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:14.429982+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 688128 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:15.430119+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 688128 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:16.430255+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 688128 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:17.430399+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 688128 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:18.430518+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 688128 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:19.430674+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 688128 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:20.430771+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 679936 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:21.430900+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 679936 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:22.431024+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 679936 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:23.431149+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 679936 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:24.431278+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 679936 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:25.431420+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 679936 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:26.431584+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 679936 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:27.431716+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 679936 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:28.431824+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 679936 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:29.431966+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 679936 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:30.432104+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 679936 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:31.432207+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 679936 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:32.432370+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 679936 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:33.432509+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 679936 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:34.432611+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 679936 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:35.432742+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 679936 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:36.432849+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 679936 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:37.432982+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 679936 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:38.433116+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 679936 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:39.433239+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 671744 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:40.433377+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 671744 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:41.433470+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 671744 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:42.433694+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 671744 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:43.433847+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 671744 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:44.433983+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 671744 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:45.434085+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 671744 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:46.434183+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 671744 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:47.434285+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 671744 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:48.434391+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 671744 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:49.434502+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 671744 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:50.434679+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 671744 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:51.434804+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 671744 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:52.434940+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 671744 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:53.435064+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 663552 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:54.435195+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 663552 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:55.435325+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 663552 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:56.435467+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 663552 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:57.435616+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 663552 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:58.435740+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 655360 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:59.435850+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 655360 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:00.436017+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 655360 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:01.436181+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 655360 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:02.436338+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 655360 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:03.436474+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 655360 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:04.436597+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 655360 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:05.436720+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 655360 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:06.436845+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 655360 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:07.436967+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 655360 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:08.437091+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 655360 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:09.437225+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 647168 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:10.437368+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 647168 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:11.437491+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 647168 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:12.437611+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 647168 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:13.437744+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 647168 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:14.437874+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 647168 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:15.438031+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 647168 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:16.438192+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 647168 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:17.438362+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 647168 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:18.438517+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 647168 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:19.438662+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 647168 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:20.438817+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 647168 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:21.438936+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 647168 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:22.439071+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 647168 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:23.439192+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 647168 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:24.439302+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 647168 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:25.439424+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 647168 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:26.439547+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 647168 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:27.439668+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 647168 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:28.439786+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 638976 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:29.439910+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 638976 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:30.440011+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 638976 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:31.440172+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 638976 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:32.440343+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 638976 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:33.440695+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 638976 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:34.440857+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 630784 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:35.440988+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 630784 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:36.441114+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 630784 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:37.441237+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 630784 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:38.441375+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 630784 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:39.441529+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 630784 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:40.441667+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 630784 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:41.441794+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 630784 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:42.441968+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 622592 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:43.442121+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 622592 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:44.442257+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 622592 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:45.442388+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 622592 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:46.442545+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 622592 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:47.443101+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 622592 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:48.443244+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 622592 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:49.443400+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 622592 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:50.443506+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 622592 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:51.443603+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 622592 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:52.443746+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 622592 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:53.443844+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 622592 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:54.443953+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 622592 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:55.444073+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 622592 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:56.444217+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 622592 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:57.444349+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 622592 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:58.444480+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 622592 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:59.444921+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 622592 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:00.445056+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 622592 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:01.445159+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 622592 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:02.445345+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: mgrc ms_handle_reset ms_handle_reset con 0x55fea5baf000
Nov 26 11:59:02 compute-0 ceph-osd[90047]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/981219021
Nov 26 11:59:02 compute-0 ceph-osd[90047]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/981219021,v1:192.168.122.100:6801/981219021]
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: get_auth_request con 0x55fea8032400 auth_method 0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: mgrc handle_mgr_configure stats_period=5
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 376832 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:03.445484+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 393216 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:04.445658+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 393216 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:05.445792+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 393216 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:06.445927+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 393216 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:07.446057+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 393216 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:08.446236+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:09.446399+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:10.446543+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:11.446684+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:12.446835+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:13.446976+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:14.447134+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:15.447293+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:16.447428+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:17.447569+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:18.447738+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:19.447856+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:20.448018+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:21.448168+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:22.448351+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:23.448513+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:24.448658+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:25.448811+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:26.448969+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:27.449134+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:28.449267+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:29.449432+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:30.449595+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:31.449717+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:32.449909+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:33.450042+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:34.450208+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:35.450363+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:36.450480+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:37.450608+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:38.450736+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:39.450864+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:40.450955+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:41.451067+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:42.451204+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:43.451325+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:44.451443+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:45.451567+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:46.451689+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:47.451817+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 385024 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:48.451915+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 376832 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:49.452060+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 376832 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:50.452185+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 376832 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:51.452322+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 376832 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:52.452465+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 376832 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:53.452577+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 376832 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:54.452686+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 376832 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:55.452852+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 376832 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:56.452990+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 376832 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:57.453124+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 376832 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:58.453276+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 376832 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:59.453407+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 376832 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:00.453577+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 376832 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:01.453787+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 376832 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:02.454004+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 376832 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:03.454165+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 376832 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:04.454308+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 376832 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:05.454516+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 376832 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:06.454715+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 376832 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:07.454928+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 376832 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:08.455108+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 376832 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:09.455292+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 376832 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:10.455469+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 376832 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:11.455679+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 360448 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:12.455846+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 360448 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:13.456012+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 360448 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:14.456208+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 360448 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:15.456388+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 360448 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:16.456600+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 360448 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:17.456777+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 360448 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:18.456952+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 360448 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:19.457124+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 360448 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:20.457350+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 360448 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:21.457568+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 360448 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:22.457809+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 360448 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:23.457990+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 360448 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:24.458174+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 360448 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:25.458349+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 360448 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:26.458563+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 360448 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:27.458720+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 360448 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:28.458882+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 360448 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:29.459094+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 360448 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:30.459306+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 360448 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:31.459469+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 360448 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:32.459694+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:33.459831+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:34.459996+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:35.460220+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:36.460410+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:37.460606+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:38.460797+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:39.460983+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:40.461183+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:41.461356+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:42.461561+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:43.461743+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:44.461960+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:45.462212+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:46.462386+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:47.462551+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:48.462708+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:49.462867+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:50.462963+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:51.463082+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:52.463224+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:53.463334+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:54.463458+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:55.463577+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:56.463703+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:57.463824+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:58.463982+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:59.464119+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:00.464235+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:01.464337+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:02.464447+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:03.464717+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 352256 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:04.464818+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 344064 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:05.464949+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 344064 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:06.465079+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 344064 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:07.465222+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 344064 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:08.465355+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 344064 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:09.465461+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 344064 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:10.465566+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 344064 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:11.465686+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 344064 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:12.465844+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 344064 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:13.465987+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 344064 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:14.466142+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 335872 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:15.466267+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 335872 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:16.466396+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 335872 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:17.466527+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 335872 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:18.466696+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 335872 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:19.466817+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 335872 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:20.466931+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 335872 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:21.467060+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 335872 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:22.467223+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 335872 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:23.467352+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 335872 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:24.467518+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 335872 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:25.467678+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 335872 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:26.467815+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 335872 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:27.467944+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 335872 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:28.468060+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:29.468211+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:30.468348+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:31.468478+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:32.468675+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:33.468794+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:34.468906+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:35.469014+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:36.469141+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:37.469261+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:38.469414+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:39.469561+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:40.470180+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:41.470315+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:42.470565+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:43.470726+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:44.470891+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:45.471028+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:46.471148+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:47.471297+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:48.471414+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:49.471550+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:50.471691+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:51.471826+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:52.471971+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:53.472092+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:54.472232+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:55.472367+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:56.472465+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:57.472603+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:58.472716+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:59.472838+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:00.472963+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:01.473067+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:02.473215+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:03.473358+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:04.473501+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:05.473609+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:06.473692+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:07.473823+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:08.474425+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:09.474538+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:10.474646+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 327680 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:11.474742+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 319488 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:12.474919+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 319488 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:13.475088+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 319488 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:14.475203+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 319488 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:15.475323+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 319488 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:16.475460+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 319488 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:17.475583+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 319488 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:18.475705+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:19.475816+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:20.476066+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:21.476183+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:22.476326+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:23.476480+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:24.477159+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:25.477287+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:26.477401+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:27.477513+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:28.477661+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:29.477786+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:30.477907+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:31.478022+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:32.478172+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:33.478292+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:34.478414+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:35.478544+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:36.478675+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:37.478771+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:38.478885+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:39.479024+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:40.479199+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:41.479329+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:42.479473+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:43.479602+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:44.479732+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:45.479868+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:46.480003+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:47.480140+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:48.480259+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:49.480395+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:50.480523+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:51.480674+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:52.480794+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:53.480912+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:54.481012+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:55.481177+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:56.481354+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:57.481482+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:58.481617+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:59.481783+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:00.481928+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:01.482069+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:02.482211+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 311296 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:03.482357+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 303104 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:04.482512+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 294912 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:05.482657+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 294912 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:06.482778+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 294912 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:07.482927+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 294912 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:08.483054+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 294912 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:09.483194+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 294912 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:10.483289+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 294912 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:11.483387+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 286720 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:12.483531+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 286720 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:13.483652+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 286720 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:14.484106+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 286720 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:15.484216+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 286720 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:16.484335+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 286720 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:17.484430+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 286720 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:18.484522+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 286720 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:19.484619+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 278528 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:20.484740+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 278528 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:21.484840+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 278528 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:22.484950+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 278528 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:23.485042+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 278528 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:24.485181+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 278528 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:25.485292+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 278528 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:26.485390+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 278528 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:27.485489+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 278528 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:28.485583+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 278528 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:29.485686+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: do_command 'config diff' '{prefix=config diff}'
Nov 26 11:59:02 compute-0 ceph-osd[90047]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 26 11:59:02 compute-0 ceph-osd[90047]: do_command 'config show' '{prefix=config show}'
Nov 26 11:59:02 compute-0 ceph-osd[90047]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 26 11:59:02 compute-0 ceph-osd[90047]: do_command 'counter dump' '{prefix=counter dump}'
Nov 26 11:59:02 compute-0 ceph-osd[90047]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 942080 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: do_command 'counter schema' '{prefix=counter schema}'
Nov 26 11:59:02 compute-0 ceph-osd[90047]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:30.485792+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcacf000/0x0/0x4ffc00000, data 0xaa540/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:02 compute-0 ceph-osd[90047]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 68411392 unmapped: 1703936 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: bluestore.MempoolThread(0x55fea492fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783699 data_alloc: 218103808 data_used: 200704
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: tick
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_tickets
Nov 26 11:59:02 compute-0 ceph-osd[90047]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:31.485908+0000)
Nov 26 11:59:02 compute-0 ceph-osd[90047]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 1638400 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:02 compute-0 ceph-osd[90047]: do_command 'log dump' '{prefix=log dump}'
Nov 26 11:59:02 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14463 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:02 compute-0 rsyslogd[960]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 11:59:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 26 11:59:02 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2634582213' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 11:59:02 compute-0 rsyslogd[960]: imjournal from <np0005536539:ceph-osd>: begin to drop messages due to rate-limiting
Nov 26 11:59:02 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14465 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 26 11:59:02 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3431116299' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 26 11:59:02 compute-0 ceph-mon[74928]: from='client.14447 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:02 compute-0 ceph-mon[74928]: from='client.14451 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:02 compute-0 ceph-mon[74928]: from='client.14455 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:02 compute-0 ceph-mon[74928]: pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:59:02 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/571159541' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 26 11:59:02 compute-0 ceph-mon[74928]: from='client.14459 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:02 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2634582213' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 11:59:02 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3431116299' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 26 11:59:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:59:02.991 159928 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 11:59:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:59:02.991 159928 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 11:59:02 compute-0 ovn_metadata_agent[159923]: 2025-11-26 11:59:02.991 159928 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 11:59:03 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14469 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 11:59:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 26 11:59:03 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/959128951' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 26 11:59:03 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14473 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 11:59:03 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:59:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 26 11:59:03 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/349661167' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 26 11:59:03 compute-0 ceph-mon[74928]: from='client.14463 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:03 compute-0 ceph-mon[74928]: from='client.14465 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:03 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/959128951' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 26 11:59:03 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/349661167' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 26 11:59:03 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14481 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 11:59:03 compute-0 ceph-ebab460c-3fd7-5f66-aa87-e10c143123f7-mgr-compute-0-mwrktr[75193]: 2025-11-26T11:59:03.929+0000 7fc9b4913640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 26 11:59:03 compute-0 ceph-mgr[75197]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 26 11:59:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 26 11:59:04 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1413393152' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 26 11:59:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 26 11:59:04 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2264861806' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 26 11:59:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 26 11:59:04 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1055174056' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 26 11:59:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 26 11:59:04 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/644188017' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 26 11:59:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 26 11:59:04 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3288191844' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 26 11:59:04 compute-0 ceph-mon[74928]: from='client.14469 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 11:59:04 compute-0 ceph-mon[74928]: from='client.14473 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 11:59:04 compute-0 ceph-mon[74928]: pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:59:04 compute-0 ceph-mon[74928]: from='client.14481 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 11:59:04 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1413393152' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 26 11:59:04 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2264861806' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 26 11:59:04 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1055174056' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 26 11:59:04 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/644188017' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 26 11:59:04 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3288191844' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 26 11:59:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 26 11:59:04 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1340947628' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 26 11:59:05 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 26 11:59:05 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2257045262' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 26 11:59:05 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 26 11:59:05 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2096776179' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 26 11:59:05 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 26 11:59:05 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1911732351' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 26 11:59:05 compute-0 crontab[256901]: (root) LIST (root)
Nov 26 11:59:05 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 26 11:59:05 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/883307655' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 26 11:59:05 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:59:05 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 26 11:59:05 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1771019484' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.070971 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.071149 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.071164 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937932968s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.142341614s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.062045 11 0.000023
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.071136 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937918663s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142341614s@ mbc={}] exit Reset 0.000025 1 0.000040
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.071179 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937918663s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142341614s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.071194 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937918663s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142341614s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937918663s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142341614s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937918663s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142341614s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937918663s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142341614s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937891006s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 110.142333984s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937874794s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142333984s@ mbc={}] exit Reset 0.000026 1 0.000039
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937874794s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142333984s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937874794s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142333984s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937874794s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142333984s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937874794s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142333984s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937874794s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142333984s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.060407 17 0.000042
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.070670 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.070712 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.070726 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.11] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.939662933s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.144172668s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.026681 4 0.000017
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.033295 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.11] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.033367 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.033381 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.939648628s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144172668s@ mbc={}] exit Reset 0.000022 1 0.000033
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.939648628s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144172668s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.939648628s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144172668s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.939648628s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144172668s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972956657s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177490234s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.939648628s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144172668s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.939648628s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144172668s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972943306s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177490234s@ mbc={}] exit Reset 0.000022 1 0.000034
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972943306s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177490234s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972943306s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177490234s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.933835983s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.136222839s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.061920 11 0.000020
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.070701 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.070859 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.070872 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937980652s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.142601013s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937968254s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142601013s@ mbc={}] exit Reset 0.000034 1 0.000033
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937968254s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142601013s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937968254s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142601013s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937968254s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142601013s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937968254s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142601013s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937968254s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142601013s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.026803 4 0.000021
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.033421 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.033483 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.033497 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972817421s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177497864s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.062083 11 0.000024
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.071142 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.071217 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.071240 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972800255s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177497864s@ mbc={}] exit Reset 0.000029 1 0.000041
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972800255s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177497864s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972800255s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177497864s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972800255s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177497864s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972800255s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177497864s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937825203s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 110.142539978s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972800255s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177497864s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937811852s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142539978s@ mbc={}] exit Reset 0.000023 1 0.000037
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937811852s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142539978s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937811852s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142539978s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937811852s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142539978s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937811852s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142539978s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937811852s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142539978s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.060681 17 0.000033
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.070944 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.070984 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.070998 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.062079 11 0.000026
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.070723 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.070759 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.070772 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.13] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.939379692s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.144195557s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.13] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.939366341s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144195557s@ mbc={}] exit Reset 0.000024 1 0.000035
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937810898s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.142639160s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.939366341s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144195557s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.939366341s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144195557s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.939366341s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144195557s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.939366341s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144195557s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.939366341s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144195557s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937796593s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142639160s@ mbc={}] exit Reset 0.000024 1 0.000039
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937796593s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142639160s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937796593s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142639160s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937796593s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142639160s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937796593s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142639160s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937796593s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142639160s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.027009 4 0.000019
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.033584 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.033620 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.033633 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972943306s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177490234s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.11] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972943306s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177490234s@ mbc={}] exit Start 0.000333 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972625732s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177513123s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972943306s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177490234s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.11] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972613335s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177513123s@ mbc={}] exit Reset 0.000021 1 0.000034
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972613335s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177513123s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972613335s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177513123s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972613335s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177513123s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972613335s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177513123s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972613335s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177513123s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.062378 11 0.000027
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.071442 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.071483 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.071495 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937573433s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 110.142532349s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937561035s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142532349s@ mbc={}] exit Reset 0.000020 1 0.000031
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937561035s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142532349s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937561035s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142532349s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937561035s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142532349s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937561035s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142532349s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937561035s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142532349s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.062412 11 0.000024
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.071422 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.071458 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.071474 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.027136 4 0.000014
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.033671 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.033725 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.033738 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.12] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937503815s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.142539978s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.12] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972482681s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177513123s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.12] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.12] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972465515s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177513123s@ mbc={}] exit Reset 0.000026 1 0.000039
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972465515s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177513123s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972465515s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177513123s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972465515s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177513123s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972465515s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177513123s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972465515s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177513123s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937141418s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 110.142250061s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937415123s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142539978s@ mbc={}] exit Reset 0.000102 1 0.000117
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937127113s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142250061s@ mbc={}] exit Reset 0.001004 1 0.001015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937127113s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142250061s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937127113s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142250061s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937127113s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142250061s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937415123s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142539978s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937127113s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142250061s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937415123s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142539978s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937127113s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142250061s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937415123s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142539978s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937415123s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142539978s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937415123s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142539978s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.062586 11 0.000022
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.071520 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.071556 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.071575 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.11] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937284470s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.142539978s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.11] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937266350s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142539978s@ mbc={}] exit Reset 0.000061 1 0.000072
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937266350s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142539978s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937266350s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142539978s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937266350s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142539978s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937266350s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142539978s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.937266350s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142539978s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.027519 4 0.000017
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.034006 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.034043 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.034056 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.9] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972103119s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177536011s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.9] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972084999s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177536011s@ mbc={}] exit Reset 0.000032 1 0.000048
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972084999s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177536011s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972084999s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177536011s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972084999s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177536011s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972084999s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177536011s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.972084999s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177536011s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.061528 17 0.000030
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.071940 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.072002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.072019 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.5] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.938540459s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.144149780s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.5] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.938523293s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144149780s@ mbc={}] exit Reset 0.000027 1 0.000042
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.938523293s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144149780s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.938523293s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144149780s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.938523293s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144149780s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.938523293s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144149780s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.938523293s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144149780s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.063068 11 0.000024
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.071983 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.072024 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.072039 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936826706s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 110.142555237s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936812401s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142555237s@ mbc={}] exit Reset 0.000025 1 0.000038
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936812401s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142555237s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936812401s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142555237s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936812401s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142555237s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936812401s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142555237s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936812401s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142555237s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.027853 4 0.000016
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.034345 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.034382 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.034394 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.19] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971720695s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177536011s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.19] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971708298s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177536011s@ mbc={}] exit Reset 0.000024 1 0.000035
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971708298s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177536011s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971708298s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177536011s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971708298s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177536011s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971708298s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177536011s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971708298s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177536011s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.063229 11 0.000021
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.072103 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.072139 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.072153 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936662674s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 110.142562866s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936652184s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142562866s@ mbc={}] exit Reset 0.000021 1 0.000034
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936652184s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142562866s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936652184s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142562866s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936652184s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142562866s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936652184s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142562866s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936652184s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142562866s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.061962 17 0.000051
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.072408 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.072483 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.072495 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.15] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.938215256s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.144203186s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.15] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.938200951s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144203186s@ mbc={}] exit Reset 0.000025 1 0.000037
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.938200951s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144203186s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.938200951s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144203186s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.938200951s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144203186s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.938200951s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144203186s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.938200951s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144203186s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.063430 11 0.000023
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.072248 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.072295 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.072309 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1a] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936482430s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 110.142585754s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1a] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936468124s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142585754s@ mbc={}] exit Reset 0.000026 1 0.000039
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936468124s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142585754s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936468124s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142585754s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936468124s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142585754s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936468124s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142585754s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936468124s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142585754s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.028268 4 0.000017
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.034694 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.034731 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.034744 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1a] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971364975s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177566528s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1a] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971353531s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177566528s@ mbc={}] exit Reset 0.000023 1 0.000035
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971353531s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177566528s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971353531s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177566528s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971353531s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177566528s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971353531s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177566528s@ mbc={}] exit Start 0.000062 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971353531s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177566528s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.028482 4 0.000013
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.034902 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.034938 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.034951 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.15] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971140862s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177566528s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.15] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971126556s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177566528s@ mbc={}] exit Reset 0.000027 1 0.000040
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971126556s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177566528s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971126556s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177566528s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971126556s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177566528s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971126556s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177566528s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.971126556s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177566528s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.063863 11 0.000026
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.072638 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.072676 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.072688 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936058044s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 110.142646790s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936043739s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142646790s@ mbc={}] exit Reset 0.000026 1 0.000039
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936043739s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142646790s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936043739s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142646790s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936043739s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142646790s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936043739s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142646790s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=14.936043739s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.142646790s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.028727 4 0.000018
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.035092 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.035126 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.035138 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.10] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.970899582s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 104.177581787s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.10] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.970885277s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177581787s@ mbc={}] exit Reset 0.000027 1 0.000038
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.970885277s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177581787s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.970885277s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177581787s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.970885277s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177581787s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.970885277s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177581787s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=8.970885277s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 104.177581787s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.062741 17 0.000044
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.073222 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.073305 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.073319 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.937348366s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.144134521s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.937335014s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144134521s@ mbc={}] exit Reset 0.000024 1 0.000042
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.937335014s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144134521s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.937335014s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144134521s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.937335014s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144134521s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.937335014s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144134521s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.937335014s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.144134521s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.006844 2 0.000035
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 1441792 heap: 70279168 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 832174 data_alloc: 218103808 data_used: 204800
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.056834 17 0.000030
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.074338 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.074374 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.074387 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.6] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.942935944s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 108.151405334s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.6] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.942917824s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.151405334s@ mbc={}] exit Reset 0.000032 1 0.000049
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.942917824s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.151405334s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.942917824s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.151405334s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.942917824s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.151405334s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.942917824s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.151405334s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=12.942917824s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.151405334s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010901 2 0.000092
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.010762 2 0.000025
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.010653 2 0.000019
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.1( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010602 2 0.000017
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.1( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.1( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.1( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.010512 2 0.000016
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.010463 2 0.000022
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.010393 2 0.000015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.17] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.17] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.14] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.14] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.18] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.14] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.14] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.10] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.18] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.10] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.3] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.3] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1340947628' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 26 11:59:05 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2257045262' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 26 11:59:05 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2096776179' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 26 11:59:05 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1911732351' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 26 11:59:05 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/883307655' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 26 11:59:05 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1771019484' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.4] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.4] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.9] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.9] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.6] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.6] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.9] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.6] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.4] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.4] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.18] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.18] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.13] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.13] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.9] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.6] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 50 handle_osd_map epochs [50,50], i have 50, src has [1,50]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1a] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1a] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.15] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.15] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.15] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.15] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.3] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.3] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.2] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.2] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.8] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.8] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.2] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.2] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.9] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.9] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.5] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.5] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.8] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.8] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.4] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.4] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.a] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.a] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.15] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.15] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.11] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.11] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.11] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.11] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.12] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.12] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.12] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.12] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.11] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.11] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.2] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.2] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.18] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.18] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.19] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.19] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1a] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1a] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.10] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.6] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.6] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.10] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1a] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1a] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.19(unlocked)] enter Initial
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000022 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000055 1 0.000022
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.b(unlocked)] enter Initial
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000013 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000036 1 0.000018
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.13(unlocked)] enter Initial
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000018 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000009
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000040 1 0.000019
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.12(unlocked)] enter Initial
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000014 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000008
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000039 1 0.000017
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.11(unlocked)] enter Initial
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000044 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000053 1 0.000025
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.10(unlocked)] enter Initial
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000015 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000011
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000033 1 0.000022
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.1a(unlocked)] enter Initial
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000019 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000011
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000036 1 0.000019
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.6(unlocked)] enter Initial
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000019 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000010
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000044 1 0.000021
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.f(unlocked)] enter Initial
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000020 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000012
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000053 1 0.000025
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.2(unlocked)] enter Initial
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000017 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000040 1 0.000046
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000028 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000097 1 0.000051
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.14(unlocked)] enter Initial
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000017 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000011
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000094 1 0.000026
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003766 2 0.000025
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003700 2 0.000018
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003511 2 0.000021
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003408 2 0.000017
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003147 2 0.000027
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003074 2 0.000022
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002951 2 0.000022
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002911 2 0.000022
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002745 2 0.000029
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002531 2 0.000068
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.14( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.002310 2 0.000025
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.14( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.14( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000087 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 50 pg[10.14( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:30.501879+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:00.450491+0000 osd.1 (osd.1) 54 : cluster [DBG] 5.1 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:00.464609+0000 osd.1 (osd.1) 55 : cluster [DBG] 5.1 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 55) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:00.450491+0000 osd.1 (osd.1) 54 : cluster [DBG] 5.1 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:00.464609+0000 osd.1 (osd.1) 55 : cluster [DBG] 5.1 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 50 handle_osd_map epochs [50,51], i have 50, src has [1,51]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 69197824 unmapped: 2129920 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009707 3 0.000023
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.009730 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000037 1 0.000054
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.985583 2 0.000013
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.989052 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009488 3 0.000021
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.009508 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000030 1 0.000043
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999952 2 0.000019
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 1.010389 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.008933 3 0.000037
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.008965 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000023 1 0.000033
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000177 2 0.000020
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.010692 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009301 3 0.000019
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.009320 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.985816 2 0.000022
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.988643 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000050 1 0.000062
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000437 2 0.000039
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.011024 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009145 3 0.000019
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.009165 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000024 1 0.000036
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009145 3 0.000032
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.009159 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000019 1 0.000026
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.1( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000816 2 0.000015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.1( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011464 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.1( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000935 2 0.000021
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.011649 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 lc 33'6 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.986667 2 0.000014
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.990422 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001111 2 0.000032
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.011961 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.986523 2 0.000013
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.989174 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009003 3 0.000023
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.009019 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000023 1 0.000030
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001425 2 0.000029
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.012398 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.008911 3 0.000021
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.008926 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000020 1 0.000028
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005806 2 0.000026
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.012756 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.986962 2 0.000022
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.989938 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.008845 3 0.000071
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.008860 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000019 1 0.000026
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.987574 2 0.000026
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.991558 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.987448 2 0.000024
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.990547 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.008148 3 0.000019
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.008164 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000056 1 0.000063
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 51 handle_osd_map epochs [51,51], i have 51, src has [1,51]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 51 handle_osd_map epochs [51,51], i have 51, src has [1,51]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009253 3 0.000023
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.009275 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000027 1 0.000041
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009125 3 0.000019
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.009141 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000023 1 0.000031
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.988327 2 0.000017
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.991455 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009032 3 0.000019
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.009046 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000020 1 0.000027
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.988551 2 0.000022
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992124 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.008457 3 0.000020
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.008472 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000036 1 0.000044
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.008384 3 0.000020
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.008398 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000034 1 0.000057
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.14( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.988529 2 0.000111
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.14( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.991045 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.14( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.14( v 49'65 lc 44'54 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.007940 3 0.000021
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.007955 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000031 1 0.000038
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.989067 2 0.000012
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992289 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003940 2 0.000028
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000019 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001436 2 0.000054
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000411 2 0.000022
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001043 2 0.000030
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000945 2 0.000021
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003474 2 0.000021
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000018 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003524 2 0.000026
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009771 7 0.000027
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.012083 7 0.000027
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003484 2 0.000022
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.012517 7 0.000028
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003467 2 0.000019
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000892 2 0.000022
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002685 2 0.000021
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001274 2 0.000021
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000010 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004380 2 0.000025
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013833 7 0.000053
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001537 2 0.000021
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003133 2 0.000022
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002965 2 0.000021
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.016736 7 0.000036
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018312 7 0.000038
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019386 7 0.000078
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019291 7 0.000029
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019997 7 0.000031
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019966 7 0.000025
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021926 7 0.000027
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022085 7 0.000050
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021843 7 0.000039
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.016919 7 0.000030
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018776 7 0.000063
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017224 7 0.000025
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018231 7 0.000023
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021173 7 0.000049
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019571 7 0.000046
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.020900 7 0.000023
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.020963 7 0.000031
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.020472 7 0.000026
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.020801 7 0.000030
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 lc 33'6 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.013428 4 0.000052
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.013129 5 0.000055
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.012971 4 0.000031
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.012922 5 0.000037
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.012587 4 0.000029
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 lc 33'6 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.012504 5 0.000046
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 lc 33'6 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.012440 4 0.000026
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.012371 5 0.000032
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.012277 4 0.000024
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.012138 4 0.000024
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.011984 5 0.000032
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011926 4 0.000023
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011465 4 0.000127
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011443 4 0.000233
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010735 4 0.000025
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010607 4 0.000031
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.013331 5 0.000535
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021651 7 0.000055
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.016950 7 0.000037
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022247 7 0.000040
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.023824 7 0.000027
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.023931 7 0.000030
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022025 7 0.000045
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018772 7 0.000037
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.023761 7 0.000025
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.023290 7 0.000072
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.14( v 49'65 lc 44'54 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.14( v 49'65 lc 44'54 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.011113 4 0.000097
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.14( v 49'65 lc 44'54 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011120 4 0.000236
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.005229 1 0.000027
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1f] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.000036 1 0.000025
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.001869 1 0.000015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.023835 7 0.000030
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.023995 7 0.000030
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.024721 7 0.000028
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.026035 7 0.000038
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.026999 7 0.000027
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.023727 7 0.000043
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027142 7 0.000027
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.024054 7 0.000042
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.024297 7 0.000044
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.024687 7 0.000376
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025576 7 0.000027
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022901 7 0.000032
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.024830 7 0.000028
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.023727 7 0.000023
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025459 7 0.000031
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025610 7 0.000028
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025872 7 0.000023
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.026192 7 0.000023
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025981 7 0.000021
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025236 7 0.000071
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.024145 7 0.000023
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027386 7 0.002216
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.026869 7 0.000025
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.026438 7 0.001004
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027163 7 0.000023
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027269 7 0.000024
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027776 7 0.000027
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.028366 7 0.000051
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.028519 7 0.000033
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.023550 7 0.000082
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.043430 7 0.000026
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.039103 7 0.000026
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.034126 2 0.000025
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.034138 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.096218 2 0.000012
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.096242 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.272194 1 0.000028
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.274110 1 0.000008
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000006 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.474362 2 0.000017
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.474384 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.193059 1 0.000054
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 lc 33'6 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.467232 1 0.000015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 lc 33'6 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 lc 33'6 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000015 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 lc 33'6 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.550578 2 0.000016
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.550609 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.139672 1 0.000103
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.606986 1 0.000010
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.088690 1 0.000062
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.695685 1 0.000008
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.096254 1 0.000039
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.791909 1 0.000011
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000014 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:31.502007+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 51 handle_osd_map epochs [52,52], i have 51, src has [1,52]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995727 3 0.000049
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.999735 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995083 3 0.000026
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.999511 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996024 3 0.000024
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.999538 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996006 3 0.000041
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.999594 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995989 3 0.000030
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.999492 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995979 3 0.000040
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.999705 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 1687552 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996155 3 0.000026
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.999324 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997076 3 0.000038
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000085 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997804 3 0.000045
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000541 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998961 3 0.000027
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000434 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999105 3 0.000023
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000189 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999120 3 0.000023
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000097 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998532 3 0.000030
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000111 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998727 3 0.000041
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000070 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998936 3 0.000025
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.999869 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999447 3 0.000122
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000591 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 52 handle_osd_map epochs [52,52], i have 52, src has [1,52]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 52 handle_osd_map epochs [52,52], i have 52, src has [1,52]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.021840 5 0.000243
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.024074 5 0.000213
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.026890 5 0.000265
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.023440 5 0.000186
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.025807 5 0.000933
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/Activating 0.027159 5 0.000367
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.024026 5 0.000331
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.024504 5 0.000207
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.027013 5 0.000200
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.024165 5 0.001344
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.025761 5 0.000252
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.023488 5 0.000889
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.027987 5 0.000418
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.024495 5 0.000196
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.026265 5 0.001818
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.222196 4 0.000088
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[10.14( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active+recovery_wait mbc={255={}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.013304 5 0.000021
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[10.14( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active+recovery_wait mbc={255={}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[10.14( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active+recovery_wait mbc={255={}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000013 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/Activating 0.027918 5 0.000821
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[10.14( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active+recovery_wait mbc={255={}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/45/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.006289 1 0.000047
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[10.14( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.003475 1 0.000056
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[10.14( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[10.14( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[10.14( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000496 1 0.000027
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.054868 2 0.000104
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.060361 1 0.000041
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000301 1 0.000056
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052426 2 0.000044
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.113098 1 0.000043
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000415 1 0.000027
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.038202 2 0.000043
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.151651 1 0.000074
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000406 1 0.000067
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.031302 2 0.000046
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.182967 1 0.000072
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000309 1 0.000034
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052311 2 0.000040
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.235848 1 0.000036
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000381 1 0.000072
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.023974 2 0.000062
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.260274 1 0.000077
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000379 1 0.000079
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.059493 2 0.000075
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.320207 1 0.000034
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000408 1 0.000080
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.039028 2 0.000054
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.359721 1 0.000035
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000510 1 0.000141
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.038188 2 0.000038
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.398497 1 0.000016
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000362 1 0.000148
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052394 2 0.000040
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.451400 1 0.000020
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000425 1 0.000040
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.038296 2 0.000040
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.490222 1 0.000011
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000305 1 0.000086
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.017198 2 0.000039
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.507809 1 0.000059
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000501 1 0.000056
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.031137 2 0.000053
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.539431 1 0.000278
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000493 1 0.000023
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 52 heartbeat osd_stat(store_statfs(0x4fdc98000/0x0/0x4ffc00000, data 0xb1588/0x124000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e3f9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.038229 2 0.000023
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.578099 1 0.000023
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.001031 1 0.000033
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.051828 2 0.000026
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.630959 1 0.000031
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000511 1 0.000028
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.059383 2 0.000033
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.706884 4 0.000020
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1d] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.706937 4 0.000014
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.9] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.706990 4 0.000025
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.6] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.707030 4 0.000047
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.4] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.707055 4 0.000012
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.b] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.707098 4 0.000010
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1b] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.707140 4 0.000015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.17] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.707193 4 0.000018
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.14] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.707228 4 0.000010
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1a] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.707273 4 0.000013
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.18] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.707315 4 0.000010
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.19] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.707372 4 0.000009
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.13] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.707414 4 0.000009
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.707458 4 0.000011
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.f] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.707512 4 0.000011
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.3] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.707564 4 0.000010
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.f] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.707612 4 0.000010
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.e] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.707652 4 0.000010
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.e] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.705378 4 0.000023
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.6] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.705414 4 0.000019
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.6] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.705456 4 0.000011
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.9] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.705509 4 0.000011
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.14] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.705582 4 0.000015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.18] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.705609 4 0.000011
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.10] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.705637 4 0.000010
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1f] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.705660 4 0.000010
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.c] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.705715 4 0.000063
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.4] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.1f( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.704936 5 0.000013
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.1f( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1f] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702761 4 0.000020
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1c] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702736 4 0.000083
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1f] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702782 4 0.000010
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1b] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702828 4 0.000011
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.b] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702756 4 0.000169
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.2] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702582 4 0.000211
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.e] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702648 4 0.000229
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.11] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702371 4 0.000280
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.12] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702380 4 0.000151
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.12] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702125 4 0.000029
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1c] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702155 4 0.000166
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1e] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702179 4 0.001027
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.11] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702206 4 0.000012
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.15] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702233 4 0.000011
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1c] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702284 4 0.000021
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1b] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702323 4 0.000010
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.8] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702386 4 0.000012
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.4] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702425 4 0.000010
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.a] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702468 4 0.000010
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.11] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702513 4 0.000009
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.5] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702594 4 0.000230
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.d] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702645 4 0.000010
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.8] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702691 4 0.000010
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.2] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702721 4 0.000010
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702752 4 0.000011
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.d] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702783 4 0.000010
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.2] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702849 4 0.000011
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.15] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702879 4 0.000009
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1a] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.702929 4 0.000010
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1a] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.703186 4 0.000310
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.18] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.687280 4 0.000032
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.c] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.687318 4 0.000025
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.15] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.683535 4 0.000058
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.9] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.621426 4 0.000069
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.3] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.243184 4 0.000047
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.f] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 1.166833 4 0.000143
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.10] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.1d( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.007566 1 0.000083
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.1d( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.714480 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.1d( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.732816 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1d] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.9( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.014862 1 0.000041
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.9( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.721825 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.9( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.741231 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.9] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.6( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.022526 1 0.000038
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.6( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.729549 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.6( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.748869 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.6] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.4( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.029520 1 0.000024
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.4( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.736573 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.4( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.756586 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.4] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.b( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.036932 1 0.000017
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.b( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.744008 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.b( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.763993 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.b] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.1b( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.044348 1 0.000025
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.1b( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.751472 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.1b( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.773414 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1b] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.17( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.051609 1 0.000051
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.17( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.758774 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.17( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.780877 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.17] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.14( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.058949 1 0.000025
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.14( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.766171 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.14( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.788036 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.14] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.1a( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.066336 1 0.000023
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.1a( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.773588 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.1a( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.790524 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1a] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.18( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.073696 1 0.000067
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.18( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.780995 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.18( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.799788 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.18] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.19( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.081090 1 0.000059
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.19( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.788429 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.19( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.805670 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.19] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:32.502110+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.13( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.088411 1 0.000032
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.13( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.795813 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.13( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.814060 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.13] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.1( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.095742 1 0.000072
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.1( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.803181 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.1( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.824371 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.f( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.103114 1 0.000080
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.f( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.810594 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.f( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.830181 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.f] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.3( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.110445 1 0.000055
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.3( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.817981 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.3( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.838898 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.3] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.f( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.117890 1 0.000029
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.f( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.825489 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.f( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.846470 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.f] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.e( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.125269 1 0.000071
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.e( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.832914 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.e( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.853404 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.e] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.e( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.132740 1 0.000070
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.e( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.840419 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.e( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.861237 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.e] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.6( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.139870 1 0.000026
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.6( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.845276 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.6( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.866954 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.6] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.6( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.147245 1 0.000070
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.6( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.852685 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.6( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.869655 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.6] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.9( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.154535 1 0.000093
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.9( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.860013 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.9( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.882277 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.9] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.14( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.161935 1 0.000076
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.14( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.867480 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.14( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.891440 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.14] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.18( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.169273 1 0.000027
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.18( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.874879 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.18( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.898719 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.18] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.10( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.176689 1 0.000023
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.10( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.882325 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.10( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.901115 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.10] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.1f( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.184034 1 0.000020
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.1f( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.889692 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.1f( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.913469 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1f] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.c( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.191452 1 0.000020
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.c( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.897136 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.c( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.920443 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.c] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.4( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.198783 1 0.000023
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.4( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.904522 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.4( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.926565 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.4] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.1f( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.205972 1 0.000079
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.1f( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.916223 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.1f( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.932995 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1f] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.1c( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.213307 1 0.000080
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.1c( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.916089 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.1c( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.939947 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1c] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.1f( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.220646 1 0.000054
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.1f( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.923406 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.1f( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.947488 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1f] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.1b( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.228013 1 0.000038
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.1b( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.930838 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.1b( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.955590 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1b] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.b( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.235366 1 0.000024
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.b( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.938219 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.b( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.964271 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.b] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.2( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.242733 1 0.000021
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.2( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.945513 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.2( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.972624 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.2] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.e( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.250093 1 0.000021
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.e( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.952765 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.e( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.979990 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.e] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.11( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.257637 1 0.000024
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.11( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.960320 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.11( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.984133 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.11] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.12( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.264869 1 0.000021
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.12( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.967388 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[8.12( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.991548 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.12] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.12( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.272234 1 0.000022
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.12( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.974721 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[11.12( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.999054 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.12] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.1c( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.279616 1 0.000020
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.1c( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.981775 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 52 pg[7.1c( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 3.004721 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 52 handle_osd_map epochs [53,53], i have 52, src has [1,53]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1c] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 1581056 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.436305 1 0.000056
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.003942 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.003703 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.003719 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023622513s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.238182068s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023562431s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238182068s@ mbc={}] exit Reset 0.000096 1 0.000135
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023562431s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238182068s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023562431s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238182068s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023562431s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238182068s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023562431s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238182068s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023562431s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238182068s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.825121 1 0.000098
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.003923 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.003447 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.003463 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022793770s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.237480164s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022745132s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237480164s@ mbc={}] exit Reset 0.000077 1 0.000099
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022745132s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237480164s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022745132s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237480164s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022745132s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237480164s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022745132s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237480164s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022745132s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237480164s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.284918 1 0.000074
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.003839 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.003397 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.003410 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023048401s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.237869263s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023011208s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237869263s@ mbc={}] exit Reset 0.000052 1 0.000068
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023011208s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237869263s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023011208s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237869263s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023011208s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237869263s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023011208s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237869263s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023011208s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237869263s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.656077 1 0.000109
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.003632 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.003237 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.003253 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022819519s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.237731934s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022778511s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237731934s@ mbc={}] exit Reset 0.000054 1 0.000071
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022778511s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237731934s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022778511s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237731934s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022778511s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237731934s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022778511s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237731934s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022778511s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237731934s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.345062 1 0.000075
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.003390 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.003105 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.003118 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023331642s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.238349915s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.577920 1 0.000059
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.003544 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.003045 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.003057 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023300171s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238349915s@ mbc={}] exit Reset 0.000047 1 0.000061
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022944450s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.237998962s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023300171s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238349915s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023300171s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238349915s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023300171s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238349915s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023300171s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238349915s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.023300171s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238349915s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022917747s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237998962s@ mbc={}] exit Reset 0.000041 1 0.000055
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022917747s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237998962s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022917747s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237998962s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022917747s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237998962s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022917747s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237998962s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022917747s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237998962s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.741180 1 0.000097
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.003526 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.002865 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.002878 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022416115s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.237792969s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.486574 1 0.000055
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.002654 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.002749 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022385597s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237792969s@ mbc={}] exit Reset 0.000044 1 0.000064
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.002762 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022385597s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237792969s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022385597s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237792969s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022385597s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237792969s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022385597s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237792969s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022385597s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237792969s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022971153s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.238403320s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022942543s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238403320s@ mbc={}] exit Reset 0.000045 1 0.000065
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022942543s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238403320s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022942543s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238403320s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022942543s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238403320s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022942543s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238403320s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022942543s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238403320s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.525560 1 0.000044
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.001799 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.002243 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.002256 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022498131s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.238121033s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022468567s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238121033s@ mbc={}] exit Reset 0.000042 1 0.000059
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022468567s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238121033s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022468567s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238121033s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022468567s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238121033s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022468567s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238121033s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022468567s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238121033s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.617377 1 0.000083
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.001697 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.001914 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.001927 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022225380s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.237945557s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.398738 1 0.000081
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.001751 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.001864 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.001876 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.865051 1 0.000108
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.002381 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.002932 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.002945 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022089005s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.237884521s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022052765s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237884521s@ mbc={}] exit Reset 0.000049 1 0.000071
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021659851s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.237464905s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022052765s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237884521s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022052765s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237884521s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022052765s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237884521s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022052765s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237884521s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022052765s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237884521s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.918060 1 0.000122
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.001707 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021580696s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237464905s@ mbc={}] exit Reset 0.000093 1 0.000109
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.001838 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021580696s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237464905s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.001852 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021580696s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237464905s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021580696s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237464905s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021580696s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237464905s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021580696s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237464905s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.020052910s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.235961914s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.020028114s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.235961914s@ mbc={}] exit Reset 0.000037 1 0.000053
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.020028114s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.235961914s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.020028114s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.235961914s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.020028114s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.235961914s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.020028114s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.235961914s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.020028114s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.235961914s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.794676 1 0.000126
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.001628 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.001710 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.469550 1 0.000062
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.001725 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.001011 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.001614 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.001627 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022456169s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.238464355s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021612167s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.237617493s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022437096s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238464355s@ mbc={}] exit Reset 0.000030 1 0.000045
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022437096s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238464355s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022437096s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238464355s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022437096s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238464355s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022437096s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238464355s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.022437096s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.238464355s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021568298s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237617493s@ mbc={}] exit Reset 0.000065 1 0.000088
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021568298s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237617493s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021568298s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237617493s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021568298s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237617493s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021568298s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237617493s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021568298s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237617493s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.717255 1 0.000112
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.001623 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.001512 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.001525 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021500587s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 113.237632751s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021479607s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237632751s@ mbc={}] exit Reset 0.000032 1 0.000048
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021479607s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237632751s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021479607s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237632751s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021479607s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237632751s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021479607s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237632751s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021479607s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237632751s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021776199s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237945557s@ mbc={}] exit Reset 0.000466 1 0.000480
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021776199s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237945557s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021776199s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237945557s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021776199s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237945557s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021776199s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237945557s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.021776199s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.237945557s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.1e( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.287025 4 0.000022
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.1e( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.989287 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.1e( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 3.014391 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1e] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 53 handle_osd_map epochs [53,53], i have 53, src has [1,53]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.11( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.294393 4 0.000019
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.11( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.996606 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.11( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 3.021465 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.11] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.15( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.301793 4 0.000060
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.15( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.004032 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.15( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 3.027787 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.15] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.1c( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.309101 4 0.000073
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.1c( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.011359 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.1c( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 3.036835 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1c] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.1b( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.316458 4 0.000070
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.1b( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.018765 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.1b( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 3.044393 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1b] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.8( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.323819 4 0.000053
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.8( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.026175 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.8( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 3.052394 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.8] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[8.4( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.331128 4 0.000033
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[8.4( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.033564 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[8.4( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 3.059464 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.4] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.a( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.338498 4 0.000065
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.a( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.040954 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.a( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 3.066960 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.a] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.11( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.345895 4 0.000106
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.11( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.048393 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.11( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 3.073662 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.11] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.5( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.353186 4 0.000107
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.5( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.055750 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.5( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 3.079912 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.5] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[8.d( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.360549 4 0.000053
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[8.d( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.063177 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[8.d( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 3.090591 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.d] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.8( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.367930 4 0.000033
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.8( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.070603 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.8( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 3.097490 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.8] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.2( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.375226 4 0.000027
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.2( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.077940 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.2( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 3.104395 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.2] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.1( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.382640 4 0.000021
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.1( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.085386 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.1( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 3.112565 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.d( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.390014 4 0.000067
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.d( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.092793 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.d( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 3.120080 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.d] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[8.2( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.397387 4 0.000062
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[8.2( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.100201 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[8.2( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 3.127995 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.2] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[8.15( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.404743 4 0.000054
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[8.15( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.107654 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[8.15( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 3.136044 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.15] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.1a( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.412425 4 0.000050
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.1a( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.115337 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.1a( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 3.143896 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1a] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.1a( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.419483 4 0.000027
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.1a( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.122447 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.1a( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 3.146083 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1a] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.18( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.426779 4 0.000025
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.18( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.130000 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.18( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 3.155673 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.18] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.c( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.434169 4 0.000021
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.c( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.121482 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[7.c( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 3.164956 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.c] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.15( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.441533 4 0.000056
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.15( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.128888 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.15( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 3.168029 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.15] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.9( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.478557 5 0.000100
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.9( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 2.162131 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.9( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 3.206080 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.9] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.3( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.485926 5 0.000097
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.3( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 2.107390 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[11.3( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 3.215750 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.3] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[8.f( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.508096 5 0.000100
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[8.f( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 1.751320 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[8.f( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 3.238259 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.f] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[8.10( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.515414 5 0.000066
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[8.10( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 1.682308 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 53 pg[8.10( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 3.246802 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.10] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:33.502243+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:03.424279+0000 osd.1 (osd.1) 56 : cluster [DBG] 2.9 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:03.438404+0000 osd.1 (osd.1) 57 : cluster [DBG] 2.9 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 53 handle_osd_map epochs [53,54], i have 53, src has [1,54]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 1581056 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 57) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:03.424279+0000 osd.1 (osd.1) 56 : cluster [DBG] 2.9 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:03.438404+0000 osd.1 (osd.1) 57 : cluster [DBG] 2.9 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009955 7 0.000071
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000047 1 0.000045
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013813 7 0.000087
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013053 7 0.000070
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014086 7 0.000058
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013684 7 0.000063
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.012900 7 0.000332
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.012603 7 0.000068
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013862 7 0.000065
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013419 7 0.000055
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.012678 7 0.002183
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013762 7 0.000062
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.012774 7 0.000052
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000159 1 0.000031
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013410 7 0.000062
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000296 1 0.000017
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000296 1 0.000067
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000392 1 0.000075
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000419 1 0.000046
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000522 1 0.000012
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000575 1 0.000015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000622 1 0.000018
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000657 1 0.000021
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000798 1 0.000196
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000743 1 0.000095
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000679 1 0.000142
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014141 7 0.000064
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014041 7 0.000180
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014220 7 0.000075
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000034 1 0.000031
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000135 1 0.000014
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000159 1 0.000083
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.071175 2 0.000201
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.071260 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.081258 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.207854 2 0.000235
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.208095 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.221941 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.259556 2 0.000087
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.259875 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.272953 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 54 heartbeat osd_stat(store_statfs(0x4fcaf7000/0x0/0x4ffc00000, data 0xb4847/0x126000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.296398 2 0.000118
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.296728 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.309690 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.348354 2 0.000088
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.348772 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.362482 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.385218 2 0.000072
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.385671 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.398313 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.444448 2 0.000122
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.445005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.458887 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.496178 2 0.000075
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.496782 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.510225 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.518321 2 0.000078
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.518968 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.531669 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.555532 2 0.000069
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.556218 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.569999 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.584916 2 0.000084
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.585774 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.599916 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.599585 2 0.000056
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.600376 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.613227 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.636767 2 0.000058
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.637484 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.651039 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.673261 2 0.000146
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.673359 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.687528 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.725163 2 0.000091
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.725332 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.739574 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.754512 2 0.000070
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.754736 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.768816 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:34.502409+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 624851 data_alloc: 218103808 data_used: 122880
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:35.502531+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:36.502623+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 54 heartbeat osd_stat(store_statfs(0x4fcb07000/0x0/0x4ffc00000, data 0xb59b6/0x115000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:37.502752+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1482752 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 54 handle_osd_map epochs [55,55], i have 54, src has [1,55]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.a(unlocked)] enter Initial
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=0 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000055 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=0 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000026
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000089 1 0.000038
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.e(unlocked)] enter Initial
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=0 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000017 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=0 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000059 1 0.000032
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.6(unlocked)] enter Initial
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=0 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000052 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=0 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000036
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000049 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000093 1 0.000141
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.2(unlocked)] enter Initial
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=0 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000054 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=0 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000031
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000049 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000064 1 0.000133
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001874 2 0.000050
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.e( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001696 2 0.000027
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.e( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.e( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.e( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.000626 2 0.000877
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000029 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.2( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001316 2 0.000051
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.2( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.2( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 55 pg[6.2( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:38.502856+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1556480 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 55 handle_osd_map epochs [55,56], i have 55, src has [1,56]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.017251968s of 10.251970291s, submitted: 570
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.2( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992122 2 0.000041
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992396 2 0.000141
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.994095 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=55/56 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.e( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992677 2 0.000107
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.e( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.994601 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.e( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.e( v 37'39 lc 33'14 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992854 2 0.000073
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.995059 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.2( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.994304 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.2( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=55/56 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=55/56 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=55/56 n=2 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.003010 3 0.000348
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=55/56 n=2 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=55/56 n=2 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000062 1 0.000046
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=55/56 n=2 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=55/56 n=2 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000015 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=55/56 n=2 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.e( v 37'39 lc 33'14 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=55/56 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003301 3 0.000276
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=55/56 n=2 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003069 3 0.000936
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=55/56 n=2 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=55/56 n=2 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=55/56 n=2 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.e( v 37'39 lc 33'14 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.003535 3 0.000315
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.e( v 37'39 lc 33'14 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 56 handle_osd_map epochs [56,56], i have 56, src has [1,56]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=55/56 n=2 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.008624 3 0.000080
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=55/56 n=2 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=55/56 n=2 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=55/56 n=2 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.e( v 37'39 lc 33'14 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.008057 3 0.000046
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.e( v 37'39 lc 33'14 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.e( v 37'39 lc 33'14 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000015 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.e( v 37'39 lc 33'14 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.073934 1 0.000081
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000021 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 56 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/43 les/c/f=56/45/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:39.502983+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 1687552 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 639039 data_alloc: 218103808 data_used: 122880
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:40.503100+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 1687552 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:41.503228+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 56 handle_osd_map epochs [57,58], i have 56, src has [1,58]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 56 handle_osd_map epochs [57,58], i have 58, src has [1,58]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.4(unlocked)] enter Initial
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=0 pi=[43,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=0 pi=[43,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000036
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000111 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000110 1 0.000203
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.c(unlocked)] enter Initial
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=0 pi=[43,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=0 pi=[43,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000033
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000063 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000071 1 0.000167
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active+clean] exit Started/Primary/Active/Clean 10.565882 17 0.000062
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active 11.046065 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary 12.057100 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started 12.057112 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966836929s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 37'39 active pruub 120.224609375s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966792107s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224609375s@ mbc={}] exit Reset 0.000061 2 0.000086
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966792107s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224609375s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966792107s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224609375s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966792107s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224609375s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966792107s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224609375s@ mbc={}] exit Start 0.000010 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966792107s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224609375s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active+clean] exit Started/Primary/Active/Clean 10.018645 14 0.000169
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active 11.046726 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary 12.057129 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started 12.057140 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57 pruub=12.966692924s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 37'39 active pruub 120.224693298s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57 pruub=12.966661453s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224693298s@ mbc={}] exit Reset 0.000045 2 0.000435
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57 pruub=12.966661453s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224693298s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57 pruub=12.966661453s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224693298s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57 pruub=12.966661453s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224693298s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57 pruub=12.966661453s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224693298s@ mbc={}] exit Start 0.000008 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57 pruub=12.966661453s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224693298s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active+clean] exit Started/Primary/Active/Clean 10.337666 17 0.000127
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active 11.045935 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active+clean] exit Started/Primary/Active/Clean 10.241502 17 0.000132
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active 11.045527 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary 12.058292 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started 12.058307 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary 12.057916 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started 12.057983 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966414452s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 37'39 active pruub 120.224662781s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966257095s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224662781s@ mbc={}] exit Reset 0.000174 2 0.000191
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966257095s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224662781s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966257095s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224662781s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966257095s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224662781s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966257095s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224662781s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966257095s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224662781s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 57 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966341019s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 37'39 active pruub 120.224655151s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966003418s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224655151s@ mbc={}] exit Reset 0.000362 2 0.000599
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966003418s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224655151s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966003418s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224655151s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966003418s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224655151s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966003418s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224655151s@ mbc={}] exit Start 0.000044 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=12.966003418s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224655151s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.4( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/GetLog 0.001493 2 0.000592
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.4( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 mlcod 0'0 peering m=4 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.4( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.4( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 mlcod 0'0 peering m=4 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.c( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001921 2 0.000054
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.c( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.c( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000035 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 58 pg[6.c( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:42.503355+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 70729728 unmapped: 598016 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 58 handle_osd_map epochs [58,59], i have 58, src has [1,59]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.c( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004497 2 0.000081
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.c( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.006586 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.c( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.c( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=58/59 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.4( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005377 2 0.000155
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.4( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering 1.007419 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.4( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 mlcod 0'0 unknown m=4 mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.4( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=58/59 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.007107 7 0.000051
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 59 handle_osd_map epochs [59,59], i have 59, src has [1,59]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.007362 7 0.000523
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.006326 7 0.000300
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.006824 7 0.000084
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.c( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=58/59 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.4( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=58/59 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.4( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=58/59 n=2 ec=43/21 lis/c=58/43 les/c/f=59/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.001084 4 0.000080
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.c( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=58/59 n=1 ec=43/21 lis/c=58/43 les/c/f=59/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.001533 4 0.000138
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.4( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=58/59 n=2 ec=43/21 lis/c=58/43 les/c/f=59/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.c( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=58/59 n=1 ec=43/21 lis/c=58/43 les/c/f=59/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.4( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=58/59 n=2 ec=43/21 lis/c=58/43 les/c/f=59/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000111 1 0.000100
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.4( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=58/59 n=2 ec=43/21 lis/c=58/43 les/c/f=59/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.4( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=58/59 n=2 ec=43/21 lis/c=58/43 les/c/f=59/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.4( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=58/59 n=2 ec=43/21 lis/c=58/43 les/c/f=59/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 59 heartbeat osd_stat(store_statfs(0x4fcafc000/0x0/0x4ffc00000, data 0xbcb31/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.208919 2 0.000147
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.209042 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=58/59 n=2 ec=43/21 lis/c=58/43 les/c/f=59/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.276540 2 0.000069
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=58/59 n=2 ec=43/21 lis/c=58/43 les/c/f=59/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=58/59 n=2 ec=43/21 lis/c=58/43 les/c/f=59/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=58/59 n=2 ec=43/21 lis/c=58/43 les/c/f=59/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.c( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=58/59 n=1 ec=43/21 lis/c=58/43 les/c/f=59/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.276734 2 0.000197
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.c( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=58/59 n=1 ec=43/21 lis/c=58/43 les/c/f=59/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.c( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=58/59 n=1 ec=43/21 lis/c=58/43 les/c/f=59/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000006 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.c( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=58/59 n=1 ec=43/21 lis/c=58/43 les/c/f=59/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.278675 2 0.000163
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.278765 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=58/59 n=1 ec=43/21 lis/c=58/43 les/c/f=59/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.069774 1 0.000094
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.137868 1 0.000050
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=58/59 n=1 ec=43/21 lis/c=58/43 les/c/f=59/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=58/59 n=1 ec=43/21 lis/c=58/43 les/c/f=59/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=58/59 n=1 ec=43/21 lis/c=58/43 les/c/f=59/45/0 sis=58) [1] r=0 lpr=58 pi=[43,58)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.068178 1 0.000033
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.414967 2 0.000022
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.414989 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000085 1 0.000047
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.489071 2 0.000181
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.489145 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000041 1 0.000057
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.f( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.155702 2 0.000172
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.f( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.293624 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.f( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started 1.509834 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.7( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.170415 2 0.000134
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.7( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.238650 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.7( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started 1.524037 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.3( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.124651 2 0.000186
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.3( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.124813 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.3( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started 1.547200 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.b( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.057754 2 0.000080
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.b( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.057846 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 59 pg[6.b( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started 1.553935 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:43.503463+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 573440 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 59 handle_osd_map epochs [59,60], i have 59, src has [1,60]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active+clean] exit Started/Primary/Active/Clean 12.766118 24 0.000199
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active 13.053433 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary 14.064136 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started 14.064150 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=10.959738731s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 37'39 active pruub 120.224670410s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=10.959684372s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224670410s@ mbc={}] exit Reset 0.000084 1 0.000122
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=10.959684372s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224670410s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=10.959684372s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224670410s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=10.959684372s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224670410s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=10.959684372s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224670410s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=10.959684372s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224670410s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active+clean] exit Started/Primary/Active/Clean 12.433617 24 0.000089
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active 13.053187 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary 14.064847 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started 14.064865 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=10.959253311s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 37'39 active pruub 120.224632263s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=10.959200859s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224632263s@ mbc={}] exit Reset 0.000086 1 0.000166
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=10.959200859s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224632263s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=10.959200859s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224632263s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=10.959200859s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224632263s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=10.959200859s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224632263s@ mbc={}] exit Start 0.000011 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 60 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60 pruub=10.959200859s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 120.224632263s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 60 handle_osd_map epochs [60,60], i have 60, src has [1,60]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:44.503603+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 475136 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 645467 data_alloc: 218103808 data_used: 143360
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 60 handle_osd_map epochs [60,61], i have 60, src has [1,61]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 61 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=-1 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.006293 7 0.000108
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 61 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=-1 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 61 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=-1 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 61 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=-1 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.006787 7 0.000084
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 61 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=-1 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 61 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=-1 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 61 handle_osd_map epochs [61,61], i have 61, src has [1,61]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 61 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=-1 lpr=60 pi=[50,60)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.070770 2 0.000029
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 61 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=-1 lpr=60 pi=[50,60)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.070799 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 61 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=-1 lpr=60 pi=[50,60)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 61 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=-1 lpr=60 pi=[50,60)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 61 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=-1 lpr=60 pi=[50,60)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000267 1 0.000085
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 61 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=-1 lpr=60 pi=[50,60)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 61 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=-1 lpr=60 pi=[50,60)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.196571 2 0.000030
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 61 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=-1 lpr=60 pi=[50,60)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.196626 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 61 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=-1 lpr=60 pi=[50,60)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 61 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=-1 lpr=60 pi=[50,60)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 61 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=-1 lpr=60 pi=[50,60)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000047 1 0.000082
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 61 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=-1 lpr=60 pi=[50,60)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 61 pg[6.d( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=-1 lpr=60 DELETING pi=[50,60)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.131465 2 0.000129
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 61 pg[6.d( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=-1 lpr=60 pi=[50,60)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.131775 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 61 pg[6.d( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=-1 lpr=60 pi=[50,60)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started 1.209396 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 61 pg[6.5( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=-1 lpr=60 DELETING pi=[50,60)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.027989 2 0.000127
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 61 pg[6.5( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=-1 lpr=60 pi=[50,60)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.028077 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 61 pg[6.5( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=-1 lpr=60 pi=[50,60)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started 1.231043 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:45.503721+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:15.311554+0000 osd.1 (osd.1) 58 : cluster [DBG] 2.4 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:15.325334+0000 osd.1 (osd.1) 59 : cluster [DBG] 2.4 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 71942144 unmapped: 434176 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 59) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:15.311554+0000 osd.1 (osd.1) 58 : cluster [DBG] 2.4 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:15.325334+0000 osd.1 (osd.1) 59 : cluster [DBG] 2.4 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:46.503834+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:16.286142+0000 osd.1 (osd.1) 60 : cluster [DBG] 5.f scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:16.300278+0000 osd.1 (osd.1) 61 : cluster [DBG] 5.f scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 393216 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 61) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:16.286142+0000 osd.1 (osd.1) 60 : cluster [DBG] 5.f scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:16.300278+0000 osd.1 (osd.1) 61 : cluster [DBG] 5.f scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:47.503947+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 71999488 unmapped: 376832 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 61 handle_osd_map epochs [62,63], i have 61, src has [1,63]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 27.708065 47 0.000077
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 27.719093 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 27.719131 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 27.719156 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.291723251s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 126.142326355s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.288963318s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142326355s@ mbc={}] exit Reset 0.002796 2 0.000367
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.288963318s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142326355s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.288963318s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142326355s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.288963318s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142326355s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.288963318s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142326355s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.288963318s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142326355s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 27.712639 47 0.000082
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 27.722077 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 27.722117 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 27.722262 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.287503242s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 126.142097473s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.287465096s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142097473s@ mbc={}] exit Reset 0.000057 2 0.000077
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.287465096s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142097473s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.287465096s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142097473s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.287465096s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142097473s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.287465096s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142097473s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.287465096s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142097473s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 27.712849 47 0.000190
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 27.722130 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 27.722164 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 27.722180 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.287308693s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 126.142211914s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.287252426s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142211914s@ mbc={}] exit Reset 0.000069 2 0.000061
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.287252426s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142211914s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.287252426s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142211914s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.287252426s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142211914s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.287252426s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142211914s@ mbc={}] exit Start 0.000009 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.287252426s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142211914s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 27.712876 47 0.000085
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 27.722255 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 27.722311 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 27.722342 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.286839485s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 126.142440796s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.286744118s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142440796s@ mbc={}] exit Reset 0.000195 2 0.000452
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.286744118s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142440796s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.286744118s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142440796s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.286744118s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142440796s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.286744118s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142440796s@ mbc={}] exit Start 0.000058 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62 pruub=12.286744118s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142440796s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:48.504142+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fcaf4000/0x0/0x4ffc00000, data 0xc1fb5/0x128000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 319488 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.304044724s of 10.369025230s, submitted: 83
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.751687 3 0.000033
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.751716 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000049 1 0.000071
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000021 1 0.000027
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000017 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.750762 3 0.000138
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.750865 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000028 1 0.000042
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000017 1 0.000026
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000021 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.751693 3 0.000058
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.751795 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000063 1 0.000165
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000029 1 0.000036
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.752967 3 0.000061
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.753661 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000038 1 0.000725
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000017 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000030 1 0.000287
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 64 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:49.504233+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 294912 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 651899 data_alloc: 218103808 data_used: 143360
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 64 handle_osd_map epochs [64,65], i have 65, src has [1,65]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001129 4 0.000177
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.001209 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000882 4 0.000071
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000994 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001365 4 0.000054
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.001443 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000442 4 0.000059
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000667 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 29.466750 54 0.000072
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 29.476135 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 29.476169 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 29.476183 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65 pruub=10.533356667s) [2] r=-1 lpr=65 pi=[45,65)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 126.142112732s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65 pruub=10.533308983s) [2] r=-1 lpr=65 pi=[45,65)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142112732s@ mbc={}] exit Reset 0.000066 1 0.000090
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65 pruub=10.533308983s) [2] r=-1 lpr=65 pi=[45,65)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142112732s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65 pruub=10.533308983s) [2] r=-1 lpr=65 pi=[45,65)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142112732s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65 pruub=10.533308983s) [2] r=-1 lpr=65 pi=[45,65)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142112732s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65 pruub=10.533308983s) [2] r=-1 lpr=65 pi=[45,65)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142112732s@ mbc={}] exit Start 0.000007 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65 pruub=10.533308983s) [2] r=-1 lpr=65 pi=[45,65)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142112732s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 29.467082 54 0.000083
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 29.475934 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 29.475988 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 29.476013 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65 pruub=10.532989502s) [2] r=-1 lpr=65 pi=[45,65)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 126.142852783s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65 pruub=10.532886505s) [2] r=-1 lpr=65 pi=[45,65)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142852783s@ mbc={}] exit Reset 0.000140 1 0.000223
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65 pruub=10.532886505s) [2] r=-1 lpr=65 pi=[45,65)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142852783s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65 pruub=10.532886505s) [2] r=-1 lpr=65 pi=[45,65)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142852783s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65 pruub=10.532886505s) [2] r=-1 lpr=65 pi=[45,65)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142852783s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65 pruub=10.532886505s) [2] r=-1 lpr=65 pi=[45,65)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142852783s@ mbc={}] exit Start 0.000093 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65 pruub=10.532886505s) [2] r=-1 lpr=65 pi=[45,65)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.142852783s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.001874 5 0.000998
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.001830 1 0.000049
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.004685 5 0.000674
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000764 1 0.000030
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.005235 5 0.000308
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.005378 5 0.000338
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.039193 2 0.000030
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.039239 1 0.000025
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.050390 1 0.000031
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.028337 2 0.000052
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.118032 1 0.000025
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.022168 1 0.000020
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035401 2 0.000035
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.175578 1 0.000015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.035937 1 0.000029
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.042468 2 0.000081
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 65 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:50.504340+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 221184 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.877433 1 0.000080
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.000782 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.001463 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.001603 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.004311562s) [2] async=[2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 131.613037109s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.004244804s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.613037109s@ mbc={}] exit Reset 0.000099 1 0.000151
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.004244804s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.613037109s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.004244804s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.613037109s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.004244804s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.613037109s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.004244804s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.613037109s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.004244804s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.613037109s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.956689 1 0.000138
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.001313 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.002815 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.002854 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=-1 lpr=65 pi=[45,65)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.000553 3 0.000057
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=-1 lpr=65 pi=[45,65)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.000582 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=-1 lpr=65 pi=[45,65)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000038 1 0.000058
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.742034 1 0.000085
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.001595 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.002604 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.002624 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.001119614s) [2] async=[2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 131.610458374s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.003470421s) [2] async=[2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 131.613113403s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.003413200s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.613113403s@ mbc={}] exit Reset 0.000113 1 0.000250
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.820802 1 0.000079
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.003413200s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.613113403s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.001856 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.003413200s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.613113403s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.003413200s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.613113403s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.003079 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.003413200s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.613113403s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.003094 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.003413200s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.613113403s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[45,64)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.003370285s) [2] async=[2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 131.613098145s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.003334045s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.613098145s@ mbc={}] exit Reset 0.000053 1 0.000073
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.003334045s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.613098145s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.003334045s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.613098145s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.003334045s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.613098145s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.003334045s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.613098145s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.003334045s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.613098145s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.000521660s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.610458374s@ mbc={}] exit Reset 0.000654 1 0.000782
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.000521660s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.610458374s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.000521660s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.610458374s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.000521660s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.610458374s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.000521660s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.610458374s@ mbc={}] exit Start 0.000123 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=-1 lpr=65 pi=[45,65)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.000024 3 0.000175
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=-1 lpr=65 pi=[45,65)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.000179 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=65) [2] r=-1 lpr=65 pi=[45,65)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66 pruub=15.000521660s) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 131.610458374s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000228 1 0.000270
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 66 handle_osd_map epochs [66,66], i have 66, src has [1,66]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001023 2 0.000080
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002742 2 0.000030
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000040 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 66 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:51.504437+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 237568 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 66 handle_osd_map epochs [66,67], i have 66, src has [1,67]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 66 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997239 3 0.000094
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000082 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 21.385060 44 0.000227
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 21.397229 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 22.409637 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 22.409650 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67 pruub=10.614984512s) [0] r=-1 lpr=67 pi=[50,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 128.224777222s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67 pruub=10.614945412s) [0] r=-1 lpr=67 pi=[50,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.224777222s@ mbc={}] exit Reset 0.000056 1 0.000083
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67 pruub=10.614945412s) [0] r=-1 lpr=67 pi=[50,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.224777222s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67 pruub=10.614945412s) [0] r=-1 lpr=67 pi=[50,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.224777222s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67 pruub=10.614945412s) [0] r=-1 lpr=67 pi=[50,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.224777222s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67 pruub=10.614945412s) [0] r=-1 lpr=67 pi=[50,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.224777222s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67 pruub=10.614945412s) [0] r=-1 lpr=67 pi=[50,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.224777222s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998672 3 0.000118
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.999819 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 67 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 67 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.012252 5 0.000210
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.012971 5 0.000201
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000071 1 0.000024
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000599 1 0.000018
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.016323 7 0.000088
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014996 7 0.000349
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.015460 7 0.000108
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.015399 7 0.000072
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.028356 2 0.000033
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.029084 1 0.000021
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000451 1 0.000059
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052361 2 0.000033
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.079419 1 0.000026
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.079506 1 0.000018
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.079574 1 0.000021
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.079664 1 0.000010
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.16( v 44'389 (0'0,44'389] lb MIN local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 DELETING pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.059351 2 0.000136
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.16( v 44'389 (0'0,44'389] lb MIN local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.138833 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.16( v 44'389 (0'0,44'389] lb MIN local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.155202 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.e( v 44'389 (0'0,44'389] lb MIN local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 DELETING pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.096412 2 0.000198
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.e( v 44'389 (0'0,44'389] lb MIN local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.175956 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.e( v 44'389 (0'0,44'389] lb MIN local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.191191 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.6( v 44'389 (0'0,44'389] lb MIN local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 DELETING pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.140749 2 0.000173
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.6( v 44'389 (0'0,44'389] lb MIN local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.220394 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.6( v 44'389 (0'0,44'389] lb MIN local-lis/les=64/65 n=6 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.235898 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.1e( v 44'389 (0'0,44'389] lb MIN local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 DELETING pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.177719 2 0.000090
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.1e( v 44'389 (0'0,44'389] lb MIN local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.257424 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 67 pg[9.1e( v 44'389 (0'0,44'389] lb MIN local-lis/les=64/65 n=5 ec=45/34 lis/c=64/45 les/c/f=65/46/0 sis=66) [2] r=-1 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.272857 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:52.504574+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 188416 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.920319 1 0.000055
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.015370 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.015481 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.015508 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 68 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68 pruub=14.997215271s) [2] async=[2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 133.622528076s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.973719 1 0.000072
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.015124 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.014962 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.014978 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[45,66)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68 pruub=14.997069359s) [2] async=[2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 133.622497559s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68 pruub=14.997024536s) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.622497559s@ mbc={}] exit Reset 0.000065 1 0.000088
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68 pruub=14.997024536s) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.622497559s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68 pruub=14.997024536s) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.622497559s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68 pruub=14.997024536s) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.622497559s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68 pruub=14.997024536s) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.622497559s@ mbc={}] exit Start 0.000009 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68 pruub=14.997024536s) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.622497559s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68 pruub=14.996996880s) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.622528076s@ mbc={}] exit Reset 0.000575 1 0.000652
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68 pruub=14.996996880s) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.622528076s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68 pruub=14.996996880s) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.622528076s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68 pruub=14.996996880s) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.622528076s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68 pruub=14.996996880s) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.622528076s@ mbc={}] exit Start 0.000039 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68 pruub=14.996996880s) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.622528076s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 68 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=-1 lpr=67 pi=[50,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019910 7 0.000053
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=-1 lpr=67 pi=[50,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=-1 lpr=67 pi=[50,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=-1 lpr=67 pi=[50,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000204 1 0.000208
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=-1 lpr=67 pi=[50,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[6.9( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=-1 lpr=67 DELETING pi=[50,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.002006 1 0.000030
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[6.9( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=-1 lpr=67 pi=[50,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.002283 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 68 pg[6.9( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=-1 lpr=67 pi=[50,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.022332 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:53.504727+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 163840 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _renew_subs
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.117534 6 0.000096
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.117511 6 0.000402
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000376 1 0.000039
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=55) [1] r=0 lpr=55 crt=37'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 15.507149 35 0.000093
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=55) [1] r=0 lpr=55 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 15.510507 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=55) [1] r=0 lpr=55 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 16.505593 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=55) [1] r=0 lpr=55 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 16.505613 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=55) [1] r=0 lpr=55 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=8.492801666s) [0] r=-1 lpr=69 pi=[55,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 128.236526489s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=8.492767334s) [0] r=-1 lpr=69 pi=[55,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.236526489s@ mbc={}] exit Reset 0.000054 1 0.000077
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=8.492767334s) [0] r=-1 lpr=69 pi=[55,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.236526489s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=8.492767334s) [0] r=-1 lpr=69 pi=[55,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.236526489s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=8.492767334s) [0] r=-1 lpr=69 pi=[55,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.236526489s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=8.492767334s) [0] r=-1 lpr=69 pi=[55,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.236526489s@ mbc={}] exit Start 0.000007 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=8.492767334s) [0] r=-1 lpr=69 pi=[55,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.236526489s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001399 2 0.000028
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 69 handle_osd_map epochs [68,69], i have 69, src has [1,69]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] lb MIN local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=-1 lpr=68 DELETING pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.038871 3 0.000120
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] lb MIN local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.039305 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] lb MIN local-lis/les=66/67 n=5 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.156878 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] lb MIN local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=-1 lpr=68 DELETING pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.089449 2 0.000141
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] lb MIN local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.090882 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] lb MIN local-lis/les=66/67 n=6 ec=45/34 lis/c=66/45 les/c/f=67/46/0 sis=68) [2] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.208599 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:54.504847+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 69 heartbeat osd_stat(store_statfs(0x4fcae2000/0x0/0x4ffc00000, data 0xce5ca/0x13b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 147456 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 608555 data_alloc: 218103808 data_used: 126976
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _renew_subs
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 70 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=-1 lpr=69 pi=[55,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.957593 6 0.000057
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 70 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=-1 lpr=69 pi=[55,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 70 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=-1 lpr=69 pi=[55,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 70 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=-1 lpr=69 pi=[55,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000288 1 0.000157
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 70 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=-1 lpr=69 pi=[55,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 70 pg[6.a( v 37'39 (0'0,37'39] lb MIN local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=-1 lpr=69 DELETING pi=[55,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.001542 2 0.000184
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 70 pg[6.a( v 37'39 (0'0,37'39] lb MIN local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=-1 lpr=69 pi=[55,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.001908 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 70 pg[6.a( v 37'39 (0'0,37'39] lb MIN local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=-1 lpr=69 pi=[55,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.959578 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:55.504946+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 70 heartbeat osd_stat(store_statfs(0x4fcae0000/0x0/0x4ffc00000, data 0xd020e/0x13c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 131072 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:56.505054+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 122880 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:57.505171+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 122880 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 70 handle_osd_map epochs [71,72], i have 70, src has [1,72]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[6.b(unlocked)] enter Initial
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=0 lpr=0 pi=[57,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000043 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=0 lpr=0 pi=[57,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000022
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000120 1 0.000043
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 37.520445 74 0.000216
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 37.524534 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 37.524576 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 37.524591 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72 pruub=10.479932785s) [2] r=-1 lpr=72 pi=[45,72)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 134.136596680s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72 pruub=10.479909897s) [2] r=-1 lpr=72 pi=[45,72)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.136596680s@ mbc={}] exit Reset 0.000041 1 0.000060
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72 pruub=10.479909897s) [2] r=-1 lpr=72 pi=[45,72)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.136596680s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72 pruub=10.479909897s) [2] r=-1 lpr=72 pi=[45,72)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.136596680s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72 pruub=10.479909897s) [2] r=-1 lpr=72 pi=[45,72)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.136596680s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72 pruub=10.479909897s) [2] r=-1 lpr=72 pi=[45,72)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.136596680s@ mbc={}] exit Start 0.000010 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72 pruub=10.479909897s) [2] r=-1 lpr=72 pi=[45,72)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.136596680s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 37.515112 74 0.000125
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001628 2 0.000041
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 37.524623 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 37.524674 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 37.524700 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72 pruub=10.484605789s) [2] r=-1 lpr=72 pi=[45,72)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 134.142684937s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72 pruub=10.484507561s) [2] r=-1 lpr=72 pi=[45,72)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.142684937s@ mbc={}] exit Reset 0.000126 1 0.000670
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72 pruub=10.484507561s) [2] r=-1 lpr=72 pi=[45,72)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.142684937s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72 pruub=10.484507561s) [2] r=-1 lpr=72 pi=[45,72)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.142684937s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72 pruub=10.484507561s) [2] r=-1 lpr=72 pi=[45,72)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.142684937s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72 pruub=10.484507561s) [2] r=-1 lpr=72 pi=[45,72)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.142684937s@ mbc={}] exit Start 0.000044 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 72 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72 pruub=10.484507561s) [2] r=-1 lpr=72 pi=[45,72)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.142684937s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 72 handle_osd_map epochs [71,72], i have 72, src has [1,72]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:58.505339+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:28.162008+0000 osd.1 (osd.1) 62 : cluster [DBG] 5.c scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:28.176189+0000 osd.1 (osd.1) 63 : cluster [DBG] 5.c scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 1122304 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 72 handle_osd_map epochs [72,73], i have 72, src has [1,73]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.981673241s of 10.059293747s, submitted: 99
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 72 handle_osd_map epochs [73,73], i have 73, src has [1,73]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 63) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:28.162008+0000 osd.1 (osd.1) 62 : cluster [DBG] 5.c scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:28.176189+0000 osd.1 (osd.1) 63 : cluster [DBG] 5.c scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=-1 lpr=72 pi=[45,72)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.007140 3 0.000132
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=-1 lpr=72 pi=[45,72)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.007232 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=-1 lpr=72 pi=[45,72)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000045 1 0.000069
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000036 1 0.000036
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=-1 lpr=72 pi=[45,72)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.008945 3 0.000060
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=-1 lpr=72 pi=[45,72)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.008968 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=72) [2] r=-1 lpr=72 pi=[45,72)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000026 1 0.000035
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000015 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000015 1 0.000034
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.008200 2 0.000145
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.010018 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=37'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=71/73 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=37'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=71/73 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=71/73 n=1 ec=43/21 lis/c=71/57 les/c/f=73/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.001608 3 0.000139
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=71/73 n=1 ec=43/21 lis/c=71/57 les/c/f=73/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=71/73 n=1 ec=43/21 lis/c=71/57 les/c/f=73/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000057 1 0.000050
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=71/73 n=1 ec=43/21 lis/c=71/57 les/c/f=73/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=71/73 n=1 ec=43/21 lis/c=71/57 les/c/f=73/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000014 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=71/73 n=1 ec=43/21 lis/c=71/57 les/c/f=73/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=71/73 n=1 ec=43/21 lis/c=71/57 les/c/f=73/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.008085 3 0.000072
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=71/73 n=1 ec=43/21 lis/c=71/57 les/c/f=73/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=71/73 n=1 ec=43/21 lis/c=71/57 les/c/f=73/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 73 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=71/73 n=1 ec=43/21 lis/c=71/57 les/c/f=73/59/0 sis=71) [1] r=0 lpr=72 pi=[57,71)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:59.505506+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 1040384 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 622376 data_alloc: 218103808 data_used: 126976
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007414 4 0.000032
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.007474 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.008077 4 0.000053
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.008176 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 74 handle_osd_map epochs [74,74], i have 74, src has [1,74]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.007267 5 0.000165
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000128 1 0.000028
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.007030 5 0.000500
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000218 1 0.000067
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035396 2 0.000051
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.035632 1 0.000049
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000524 1 0.000026
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052221 2 0.000028
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 74 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:00.505606+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 1089536 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 74 heartbeat osd_stat(store_statfs(0x4fcad1000/0x0/0x4ffc00000, data 0xd8b75/0x14b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 74 handle_osd_map epochs [74,75], i have 74, src has [1,75]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 74 handle_osd_map epochs [75,75], i have 75, src has [1,75]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.963640 1 0.000074
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.006808 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.014297 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.014324 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75 pruub=15.000406265s) [2] async=[2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 141.680450439s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75 pruub=15.000348091s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.680450439s@ mbc={}] exit Reset 0.000084 1 0.000126
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75 pruub=15.000348091s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.680450439s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75 pruub=15.000348091s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.680450439s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75 pruub=15.000348091s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.680450439s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75 pruub=15.000348091s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.680450439s@ mbc={}] exit Start 0.000007 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75 pruub=15.000348091s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.680450439s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.911342 1 0.000056
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.006919 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.015108 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.015130 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[45,73)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75 pruub=14.999793053s) [2] async=[2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 141.680419922s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75 pruub=14.999548912s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.680419922s@ mbc={}] exit Reset 0.000265 1 0.000291
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75 pruub=14.999548912s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.680419922s@ mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75 pruub=14.999548912s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.680419922s@ mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75 pruub=14.999548912s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.680419922s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75 pruub=14.999548912s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.680419922s@ mbc={}] exit Start 0.000009 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75 pruub=14.999548912s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.680419922s@ mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 75 handle_osd_map epochs [75,75], i have 75, src has [1,75]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:01.505726+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 1040384 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _renew_subs
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031107 6 0.000074
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032264 6 0.000088
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000387 1 0.000035
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000486 2 0.000079
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] lb MIN local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=-1 lpr=75 DELETING pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.071503 3 0.000336
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] lb MIN local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.071937 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] lb MIN local-lis/les=73/74 n=5 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.103092 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] lb MIN local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=-1 lpr=75 DELETING pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.111510 2 0.000133
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] lb MIN local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.112072 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] lb MIN local-lis/les=73/74 n=6 ec=45/34 lis/c=73/45 les/c/f=74/46/0 sis=75) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.144410 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:02.505816+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:32.193361+0000 osd.1 (osd.1) 64 : cluster [DBG] 5.1d scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:32.311135+0000 osd.1 (osd.1) 65 : cluster [DBG] 5.1d scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72400896 unmapped: 1024000 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 65) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:32.193361+0000 osd.1 (osd.1) 64 : cluster [DBG] 5.1d scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:32.311135+0000 osd.1 (osd.1) 65 : cluster [DBG] 5.1d scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:03.505948+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72400896 unmapped: 1024000 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:04.506082+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 1015808 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 611235 data_alloc: 218103808 data_used: 122880
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:05.506199+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:35.088859+0000 osd.1 (osd.1) 66 : cluster [DBG] 5.1a scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:35.102973+0000 osd.1 (osd.1) 67 : cluster [DBG] 5.1a scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 76 heartbeat osd_stat(store_statfs(0x4fcacd000/0x0/0x4ffc00000, data 0xdbf97/0x150000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 983040 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 67) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:35.088859+0000 osd.1 (osd.1) 66 : cluster [DBG] 5.1a scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:35.102973+0000 osd.1 (osd.1) 67 : cluster [DBG] 5.1a scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:06.506389+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 974848 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:07.506525+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 966656 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 77 pg[6.d(unlocked)] enter Initial
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=0 lpr=0 pi=[60,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000033 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=0 lpr=0 pi=[60,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000016
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000108 1 0.000034
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 77 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.000534 2 0.000056
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 77 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 77 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 77 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:08.506680+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 950272 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 78 handle_osd_map epochs [77,79], i have 78, src has [1,79]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.006110191s of 10.050721169s, submitted: 35
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 78 handle_osd_map epochs [79,79], i have 79, src has [1,79]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 79 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.975809 6 0.000120
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 79 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 0.976538 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 79 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=37'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 79 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=77/79 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 79 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=77/79 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 79 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=77/79 n=1 ec=43/21 lis/c=77/60 les/c/f=79/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.001601 4 0.000105
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 79 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=77/79 n=1 ec=43/21 lis/c=77/60 les/c/f=79/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 79 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=77/79 n=1 ec=43/21 lis/c=77/60 les/c/f=79/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000049 1 0.000034
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 79 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=77/79 n=1 ec=43/21 lis/c=77/60 les/c/f=79/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 79 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=77/79 n=1 ec=43/21 lis/c=77/60 les/c/f=79/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 79 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=77/79 n=1 ec=43/21 lis/c=77/60 les/c/f=79/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 79 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=77/79 n=1 ec=43/21 lis/c=77/60 les/c/f=79/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.066970 2 0.000032
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 79 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=77/79 n=1 ec=43/21 lis/c=77/60 les/c/f=79/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 79 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=77/79 n=1 ec=43/21 lis/c=77/60 les/c/f=79/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 79 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=77/79 n=1 ec=43/21 lis/c=77/60 les/c/f=79/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:09.506792+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 876544 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 629215 data_alloc: 218103808 data_used: 131072
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:10.506896+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 868352 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.18 deep-scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.18 deep-scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:11.507018+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:41.261007+0000 osd.1 (osd.1) 68 : cluster [DBG] 5.18 deep-scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:41.275089+0000 osd.1 (osd.1) 69 : cluster [DBG] 5.18 deep-scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 79 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xe1440/0x15a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 868352 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 69) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:41.261007+0000 osd.1 (osd.1) 68 : cluster [DBG] 5.18 deep-scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:41.275089+0000 osd.1 (osd.1) 69 : cluster [DBG] 5.18 deep-scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:12.507170+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 868352 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:13.507261+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 851968 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:14.507361+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 851968 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 628635 data_alloc: 218103808 data_used: 131072
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:15.507464+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 843776 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:16.507591+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 843776 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 80 heartbeat osd_stat(store_statfs(0x4fcac0000/0x0/0x4ffc00000, data 0xe3158/0x15d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 80 handle_osd_map epochs [81,81], i have 81, src has [1,81]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:17.507684+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:47.335013+0000 osd.1 (osd.1) 70 : cluster [DBG] 5.19 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:47.349263+0000 osd.1 (osd.1) 71 : cluster [DBG] 5.19 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 827392 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 71) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:47.335013+0000 osd.1 (osd.1) 70 : cluster [DBG] 5.19 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:47.349263+0000 osd.1 (osd.1) 71 : cluster [DBG] 5.19 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:18.507875+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:48.351065+0000 osd.1 (osd.1) 72 : cluster [DBG] 4.5 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:48.364861+0000 osd.1 (osd.1) 73 : cluster [DBG] 4.5 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 827392 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 73) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:48.351065+0000 osd.1 (osd.1) 72 : cluster [DBG] 4.5 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:48.364861+0000 osd.1 (osd.1) 73 : cluster [DBG] 4.5 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:19.508038+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 819200 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 638554 data_alloc: 218103808 data_used: 151552
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 81 heartbeat osd_stat(store_statfs(0x4fcabd000/0x0/0x4ffc00000, data 0xe4cdd/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.583781242s of 10.609751701s, submitted: 28
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:20.508750+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 729088 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 82 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xe685a/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:21.508847+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 688128 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:22.508970+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 83 heartbeat osd_stat(store_statfs(0x4fcab6000/0x0/0x4ffc00000, data 0xe83d7/0x166000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 688128 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:23.509072+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 83 heartbeat osd_stat(store_statfs(0x4fcab6000/0x0/0x4ffc00000, data 0xe83d7/0x166000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 83 handle_osd_map epochs [84,84], i have 84, src has [1,84]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 679936 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:24.509180+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 655360 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 648814 data_alloc: 218103808 data_used: 151552
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:25.509292+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:55.467653+0000 osd.1 (osd.1) 74 : cluster [DBG] 4.7 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:55.481669+0000 osd.1 (osd.1) 75 : cluster [DBG] 4.7 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 655360 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 84 handle_osd_map epochs [85,86], i have 84, src has [1,86]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 75) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:55.467653+0000 osd.1 (osd.1) 74 : cluster [DBG] 4.7 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:55.481669+0000 osd.1 (osd.1) 75 : cluster [DBG] 4.7 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.9 deep-scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.9 deep-scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:26.509410+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:56.477956+0000 osd.1 (osd.1) 76 : cluster [DBG] 4.9 deep-scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:56.492120+0000 osd.1 (osd.1) 77 : cluster [DBG] 4.9 deep-scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 638976 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.4 deep-scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.4 deep-scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 77) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:56.477956+0000 osd.1 (osd.1) 76 : cluster [DBG] 4.9 deep-scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:56.492120+0000 osd.1 (osd.1) 77 : cluster [DBG] 4.9 deep-scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:27.509552+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:57.438783+0000 osd.1 (osd.1) 78 : cluster [DBG] 4.4 deep-scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:57.452914+0000 osd.1 (osd.1) 79 : cluster [DBG] 4.4 deep-scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 638976 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 86 handle_osd_map epochs [87,88], i have 86, src has [1,88]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 88 pg[9.15(unlocked)] enter Initial
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88) [1] r=0 lpr=0 pi=[53,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000065 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88) [1] r=0 lpr=0 pi=[53,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88) [1] r=0 lpr=88 pi=[53,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000026 1 0.000050
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88) [1] r=0 lpr=88 pi=[53,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88) [1] r=0 lpr=88 pi=[53,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88) [1] r=0 lpr=88 pi=[53,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88) [1] r=0 lpr=88 pi=[53,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000086 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88) [1] r=0 lpr=88 pi=[53,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88) [1] r=0 lpr=88 pi=[53,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88) [1] r=0 lpr=88 pi=[53,88)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88) [1] r=0 lpr=88 pi=[53,88)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000133 1 0.000200
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88) [1] r=0 lpr=88 pi=[53,88)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88) [1] r=0 lpr=88 pi=[53,88)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000035 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88) [1] r=0 lpr=88 pi=[53,88)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000242 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88) [1] r=0 lpr=88 pi=[53,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 79) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:57.438783+0000 osd.1 (osd.1) 78 : cluster [DBG] 4.4 deep-scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:57.452914+0000 osd.1 (osd.1) 79 : cluster [DBG] 4.4 deep-scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 88 handle_osd_map epochs [88,89], i have 89, src has [1,89]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88) [1] r=0 lpr=88 pi=[53,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.593364 2 0.000146
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88) [1] r=0 lpr=88 pi=[53,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.593685 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88) [1] r=0 lpr=88 pi=[53,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.593812 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88) [1] r=0 lpr=88 pi=[53,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[53,89)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[53,89)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000066 1 0.000102
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[53,89)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[53,89)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[53,89)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[53,89)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[53,89)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 89 handle_osd_map epochs [89,89], i have 89, src has [1,89]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:28.509770+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 630784 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 89 heartbeat osd_stat(store_statfs(0x4fcaa7000/0x0/0x4ffc00000, data 0xf0b24/0x175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:29.509878+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:59.484442+0000 osd.1 (osd.1) 80 : cluster [DBG] 4.8 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:41:59.498540+0000 osd.1 (osd.1) 81 : cluster [DBG] 4.8 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 622592 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667392 data_alloc: 218103808 data_used: 151552
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _renew_subs
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.656233788s of 10.688500404s, submitted: 58
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 81) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:59.484442+0000 osd.1 (osd.1) 80 : cluster [DBG] 4.8 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:41:59.498540+0000 osd.1 (osd.1) 81 : cluster [DBG] 4.8 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 90 pg[9.15( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 2.006308 5 0.000052
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 90 pg[9.15( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 90 pg[9.15( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 90 pg[9.15( v 44'389 lc 39'143 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[53,89)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002281 4 0.000130
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 90 pg[9.15( v 44'389 lc 39'143 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[53,89)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 90 pg[9.15( v 44'389 lc 39'143 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[53,89)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000049 1 0.000031
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 90 pg[9.15( v 44'389 lc 39'143 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[53,89)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:30.510112+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:00.490277+0000 osd.1 (osd.1) 82 : cluster [DBG] 4.2 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:00.507844+0000 osd.1 (osd.1) 83 : cluster [DBG] 4.2 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 90 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[53,89)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.036337 1 0.000031
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 90 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[53,89)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 598016 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 83) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:00.490277+0000 osd.1 (osd.1) 82 : cluster [DBG] 4.2 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:00.507844+0000 osd.1 (osd.1) 83 : cluster [DBG] 4.2 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[53,89)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.971340 1 0.000055
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[53,89)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.010117 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[53,89)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 3.016462 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[53,89)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000053 1 0.000079
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000023 1 0.000028
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0olog.dups.size()=9
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001100 3 0.000031
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000046 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:31.510213+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:01.484401+0000 osd.1 (osd.1) 84 : cluster [DBG] 4.d scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:01.498444+0000 osd.1 (osd.1) 85 : cluster [DBG] 4.d scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 589824 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 91 heartbeat osd_stat(store_statfs(0x4fca9d000/0x0/0x4ffc00000, data 0xf5d2f/0x17f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 91 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 85) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:01.484401+0000 osd.1 (osd.1) 84 : cluster [DBG] 4.d scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:01.498444+0000 osd.1 (osd.1) 85 : cluster [DBG] 4.d scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001088 2 0.000096
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002316 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=91/92 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 92 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:32.510317+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=91/92 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=91/92 n=5 ec=45/34 lis/c=91/53 les/c/f=92/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006932 4 0.000151
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=91/92 n=5 ec=45/34 lis/c=91/53 les/c/f=92/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=91/92 n=5 ec=45/34 lis/c=91/53 les/c/f=92/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=91/92 n=5 ec=45/34 lis/c=91/53 les/c/f=92/54/0 sis=91) [1] r=0 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 524288 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:33.510406+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 516096 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:34.510504+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 92 heartbeat osd_stat(store_statfs(0x4fca9b000/0x0/0x4ffc00000, data 0xf779c/0x182000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 92 handle_osd_map epochs [93,94], i have 92, src has [1,94]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 92 handle_osd_map epochs [93,94], i have 94, src has [1,94]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1515520 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693141 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.f deep-scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:35.510589+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 1 last_log 86 sent 85 num 1 unsent 1 sending 1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:05.498029+0000 osd.1 (osd.1) 86 : cluster [DBG] 4.f deep-scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.f deep-scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 86) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:05.498029+0000 osd.1 (osd.1) 86 : cluster [DBG] 4.f deep-scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1515520 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:36.510675+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 1 last_log 87 sent 86 num 1 unsent 1 sending 1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:05.512137+0000 osd.1 (osd.1) 87 : cluster [DBG] 4.f deep-scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 87) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:05.512137+0000 osd.1 (osd.1) 87 : cluster [DBG] 4.f deep-scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1515520 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:37.510798+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 95 heartbeat osd_stat(store_statfs(0x4fca92000/0x0/0x4ffc00000, data 0xfc663/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 450560 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:38.510915+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 450560 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 96 heartbeat osd_stat(store_statfs(0x4fca8e000/0x0/0x4ffc00000, data 0xfe1e0/0x18e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:39.511007+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:09.471793+0000 osd.1 (osd.1) 88 : cluster [DBG] 4.10 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:09.485907+0000 osd.1 (osd.1) 89 : cluster [DBG] 4.10 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 89) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:09.471793+0000 osd.1 (osd.1) 88 : cluster [DBG] 4.10 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:09.485907+0000 osd.1 (osd.1) 89 : cluster [DBG] 4.10 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 450560 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 699780 data_alloc: 218103808 data_used: 172032
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:40.511116+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:10.458111+0000 osd.1 (osd.1) 90 : cluster [DBG] 4.12 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:10.472216+0000 osd.1 (osd.1) 91 : cluster [DBG] 4.12 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 91) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:10.458111+0000 osd.1 (osd.1) 90 : cluster [DBG] 4.12 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:10.472216+0000 osd.1 (osd.1) 91 : cluster [DBG] 4.12 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 442368 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 96 heartbeat osd_stat(store_statfs(0x4fca90000/0x0/0x4ffc00000, data 0xfe1e0/0x18e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.554465294s of 10.614696503s, submitted: 43
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:41.511260+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 98 heartbeat osd_stat(store_statfs(0x4fca8c000/0x0/0x4ffc00000, data 0xffd5d/0x191000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 434176 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:42.511364+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 434176 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.14 deep-scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 4.14 deep-scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:43.511460+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:13.431985+0000 osd.1 (osd.1) 92 : cluster [DBG] 4.14 deep-scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:13.445787+0000 osd.1 (osd.1) 93 : cluster [DBG] 4.14 deep-scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 93) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:13.431985+0000 osd.1 (osd.1) 92 : cluster [DBG] 4.14 deep-scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:13.445787+0000 osd.1 (osd.1) 93 : cluster [DBG] 4.14 deep-scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 417792 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:44.511606+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 417792 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 709892 data_alloc: 218103808 data_used: 180224
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 98 handle_osd_map epochs [99,101], i have 98, src has [1,101]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:45.511749+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 417792 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:46.511853+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 409600 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 103 heartbeat osd_stat(store_statfs(0x4fca78000/0x0/0x4ffc00000, data 0x109f0f/0x1a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:47.511981+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 103 handle_osd_map epochs [103,104], i have 103, src has [1,104]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 335872 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:48.512120+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 319488 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:49.512223+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 319488 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 731188 data_alloc: 218103808 data_used: 192512
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 105 heartbeat osd_stat(store_statfs(0x4fca73000/0x0/0x4ffc00000, data 0x10d3a3/0x1a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 105 handle_osd_map epochs [106,106], i have 106, src has [1,106]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:50.512323+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 311296 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:51.512435+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.514575958s of 10.540659904s, submitted: 17
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 696320 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:52.512536+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:21.643847+0000 osd.1 (osd.1) 94 : cluster [DBG] 7.7 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:21.658001+0000 osd.1 (osd.1) 95 : cluster [DBG] 7.7 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 95) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:21.643847+0000 osd.1 (osd.1) 94 : cluster [DBG] 7.7 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:21.658001+0000 osd.1 (osd.1) 95 : cluster [DBG] 7.7 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.b scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.b scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 696320 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:53.512760+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:22.635104+0000 osd.1 (osd.1) 96 : cluster [DBG] 7.b scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:22.649039+0000 osd.1 (osd.1) 97 : cluster [DBG] 7.b scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.d deep-scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.d deep-scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 688128 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 97) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:22.635104+0000 osd.1 (osd.1) 96 : cluster [DBG] 7.b scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:22.649039+0000 osd.1 (osd.1) 97 : cluster [DBG] 7.b scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 109 pg[9.1f(unlocked)] enter Initial
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 109 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109) [1] r=0 lpr=0 pi=[66,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000034 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 109 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109) [1] r=0 lpr=0 pi=[66,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 109 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109) [1] r=0 lpr=109 pi=[66,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000029
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 109 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109) [1] r=0 lpr=109 pi=[66,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 109 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109) [1] r=0 lpr=109 pi=[66,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 109 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109) [1] r=0 lpr=109 pi=[66,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 109 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109) [1] r=0 lpr=109 pi=[66,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 109 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109) [1] r=0 lpr=109 pi=[66,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 109 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109) [1] r=0 lpr=109 pi=[66,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 109 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109) [1] r=0 lpr=109 pi=[66,109)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 109 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109) [1] r=0 lpr=109 pi=[66,109)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000078 1 0.000036
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 109 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109) [1] r=0 lpr=109 pi=[66,109)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 109 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109) [1] r=0 lpr=109 pi=[66,109)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000022 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 109 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109) [1] r=0 lpr=109 pi=[66,109)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000118 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 109 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109) [1] r=0 lpr=109 pi=[66,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:54.512885+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:23.621288+0000 osd.1 (osd.1) 98 : cluster [DBG] 7.d deep-scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:23.635254+0000 osd.1 (osd.1) 99 : cluster [DBG] 7.d deep-scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 679936 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 749859 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 109 handle_osd_map epochs [109,110], i have 109, src has [1,110]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 109 handle_osd_map epochs [109,110], i have 110, src has [1,110]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109) [1] r=0 lpr=109 pi=[66,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.001079 2 0.000046
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109) [1] r=0 lpr=109 pi=[66,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.001225 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109) [1] r=0 lpr=109 pi=[66,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.001245 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=109) [1] r=0 lpr=109 pi=[66,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] r=-1 lpr=110 pi=[66,110)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] r=-1 lpr=110 pi=[66,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000068 1 0.000101
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] r=-1 lpr=110 pi=[66,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] r=-1 lpr=110 pi=[66,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] r=-1 lpr=110 pi=[66,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] r=-1 lpr=110 pi=[66,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 110 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] r=-1 lpr=110 pi=[66,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 99) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:23.621288+0000 osd.1 (osd.1) 98 : cluster [DBG] 7.d deep-scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:23.635254+0000 osd.1 (osd.1) 99 : cluster [DBG] 7.d deep-scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 110 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:55.513013+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:24.627681+0000 osd.1 (osd.1) 100 : cluster [DBG] 7.10 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:24.641536+0000 osd.1 (osd.1) 101 : cluster [DBG] 7.10 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fca62000/0x0/0x4ffc00000, data 0x115b2a/0x1b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 622592 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 101) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:24.627681+0000 osd.1 (osd.1) 100 : cluster [DBG] 7.10 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:24.641536+0000 osd.1 (osd.1) 101 : cluster [DBG] 7.10 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:56.513141+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:25.612672+0000 osd.1 (osd.1) 102 : cluster [DBG] 7.12 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:25.626753+0000 osd.1 (osd.1) 103 : cluster [DBG] 7.12 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _renew_subs
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 111 pg[9.1f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] r=-1 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.562743 5 0.000061
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 111 pg[9.1f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] r=-1 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 111 pg[9.1f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=110) [1]/[2] r=-1 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: not registered w/ OSD
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 111 pg[9.1f( v 44'389 lc 39'177 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [1]/[2] r=-1 lpr=110 pi=[66,110)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002399 4 0.000159
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 111 pg[9.1f( v 44'389 lc 39'177 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [1]/[2] r=-1 lpr=110 pi=[66,110)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 111 pg[9.1f( v 44'389 lc 39'177 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [1]/[2] r=-1 lpr=110 pi=[66,110)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000041 1 0.000067
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 111 pg[9.1f( v 44'389 lc 39'177 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [1]/[2] r=-1 lpr=110 pi=[66,110)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [1]/[2] r=-1 lpr=110 pi=[66,110)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035749 1 0.000039
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [1]/[2] r=-1 lpr=110 pi=[66,110)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 532480 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 103) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:25.612672+0000 osd.1 (osd.1) 102 : cluster [DBG] 7.12 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:25.626753+0000 osd.1 (osd.1) 103 : cluster [DBG] 7.12 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 111 handle_osd_map epochs [111,112], i have 111, src has [1,112]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [1]/[2] r=-1 lpr=110 pi=[66,110)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.742962 1 0.000051
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [1]/[2] r=-1 lpr=110 pi=[66,110)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.781274 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [1]/[2] r=-1 lpr=110 pi=[66,110)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.344073 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [1]/[2] r=-1 lpr=110 pi=[66,110)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000091 1 0.000125
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002752 2 0.000032
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 112 handle_osd_map epochs [112,112], i have 112, src has [1,112]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: merge_log_dups log.dups.size()=0 olog.dups.size()=11
Nov 26 11:59:05 compute-0 ceph-osd[89074]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000376 2 0.000087
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:57.513293+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 532480 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 112 handle_osd_map epochs [112,113], i have 112, src has [1,113]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 112 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003810 2 0.000059
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007176 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/66 les/c/f=113/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001582 4 0.000571
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/66 les/c/f=113/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/66 les/c/f=113/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/66 les/c/f=113/67/0 sis=112) [1] r=0 lpr=112 pi=[66,112)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:58.513420+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca59000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 516096 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:59.513545+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:28.582931+0000 osd.1 (osd.1) 104 : cluster [DBG] 7.14 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:28.597025+0000 osd.1 (osd.1) 105 : cluster [DBG] 7.14 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 499712 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771940 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 105) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:28.582931+0000 osd.1 (osd.1) 104 : cluster [DBG] 7.14 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:28.597025+0000 osd.1 (osd.1) 105 : cluster [DBG] 7.14 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:00.513695+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 491520 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:01.513843+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.000967026s of 10.040258408s, submitted: 39
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 483328 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:02.513966+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:31.684166+0000 osd.1 (osd.1) 106 : cluster [DBG] 7.16 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:31.698232+0000 osd.1 (osd.1) 107 : cluster [DBG] 7.16 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 483328 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 107) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:31.684166+0000 osd.1 (osd.1) 106 : cluster [DBG] 7.16 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:31.698232+0000 osd.1 (osd.1) 107 : cluster [DBG] 7.16 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:03.514116+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:32.646907+0000 osd.1 (osd.1) 108 : cluster [DBG] 7.17 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:32.661048+0000 osd.1 (osd.1) 109 : cluster [DBG] 7.17 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 466944 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 109) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:32.646907+0000 osd.1 (osd.1) 108 : cluster [DBG] 7.17 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:32.661048+0000 osd.1 (osd.1) 109 : cluster [DBG] 7.17 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:04.514256+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:33.630233+0000 osd.1 (osd.1) 110 : cluster [DBG] 7.19 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:33.644337+0000 osd.1 (osd.1) 111 : cluster [DBG] 7.19 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:05 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 466944 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 774132 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 111) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:33.630233+0000 osd.1 (osd.1) 110 : cluster [DBG] 7.19 scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:33.644337+0000 osd.1 (osd.1) 111 : cluster [DBG] 7.19 scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:05.514401+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:34.654817+0000 osd.1 (osd.1) 112 : cluster [DBG] 7.1d scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:34.668951+0000 osd.1 (osd.1) 113 : cluster [DBG] 7.1d scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 458752 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 113) v1
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:34.654817+0000 osd.1 (osd.1) 112 : cluster [DBG] 7.1d scrub starts
Nov 26 11:59:05 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:34.668951+0000 osd.1 (osd.1) 113 : cluster [DBG] 7.1d scrub ok
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:06.514547+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 458752 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:07.514698+0000)
Nov 26 11:59:05 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 450560 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:05 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:08.514826+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 450560 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:09.514945+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 450560 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 775280 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:10.515046+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:39.755879+0000 osd.1 (osd.1) 114 : cluster [DBG] 7.1e scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:39.770033+0000 osd.1 (osd.1) 115 : cluster [DBG] 7.1e scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 434176 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 115) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:39.755879+0000 osd.1 (osd.1) 114 : cluster [DBG] 7.1e scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:39.770033+0000 osd.1 (osd.1) 115 : cluster [DBG] 7.1e scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:11.515179+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:40.774329+0000 osd.1 (osd.1) 116 : cluster [DBG] 8.1 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:40.788235+0000 osd.1 (osd.1) 117 : cluster [DBG] 8.1 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.100564957s of 10.119820595s, submitted: 12
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 434176 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 117) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:40.774329+0000 osd.1 (osd.1) 116 : cluster [DBG] 8.1 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:40.788235+0000 osd.1 (osd.1) 117 : cluster [DBG] 8.1 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:12.515305+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:41.803889+0000 osd.1 (osd.1) 118 : cluster [DBG] 8.3 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:41.818121+0000 osd.1 (osd.1) 119 : cluster [DBG] 8.3 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 425984 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 119) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:41.803889+0000 osd.1 (osd.1) 118 : cluster [DBG] 8.3 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:41.818121+0000 osd.1 (osd.1) 119 : cluster [DBG] 8.3 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:13.515436+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 425984 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:14.515533+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 425984 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 777574 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:15.515698+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 417792 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:16.515792+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 417792 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:17.515886+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 409600 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:18.515996+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 401408 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:19.516088+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:48.712048+0000 osd.1 (osd.1) 120 : cluster [DBG] 8.5 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:48.726163+0000 osd.1 (osd.1) 121 : cluster [DBG] 8.5 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 121) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:48.712048+0000 osd.1 (osd.1) 120 : cluster [DBG] 8.5 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:48.726163+0000 osd.1 (osd.1) 121 : cluster [DBG] 8.5 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 393216 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 778721 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:20.516220+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 393216 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:21.516336+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 385024 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:22.516512+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 385024 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:23.516611+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 385024 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:24.516740+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 376832 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 778721 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:25.516840+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.983625412s of 13.988371849s, submitted: 4
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 376832 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:26.516985+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:55.792361+0000 osd.1 (osd.1) 122 : cluster [DBG] 8.7 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:55.806526+0000 osd.1 (osd.1) 123 : cluster [DBG] 8.7 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 123) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:55.792361+0000 osd.1 (osd.1) 122 : cluster [DBG] 8.7 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:55.806526+0000 osd.1 (osd.1) 123 : cluster [DBG] 8.7 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 368640 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:27.517150+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 368640 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:28.517288+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 368640 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:29.517394+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:58.752330+0000 osd.1 (osd.1) 124 : cluster [DBG] 8.8 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:42:58.766518+0000 osd.1 (osd.1) 125 : cluster [DBG] 8.8 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 125) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:58.752330+0000 osd.1 (osd.1) 124 : cluster [DBG] 8.8 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:42:58.766518+0000 osd.1 (osd.1) 125 : cluster [DBG] 8.8 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 335872 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 781015 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:30.517574+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.2 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.2 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 335872 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:31.517675+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:00.738081+0000 osd.1 (osd.1) 126 : cluster [DBG] 9.2 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:00.769881+0000 osd.1 (osd.1) 127 : cluster [DBG] 9.2 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 127) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:00.738081+0000 osd.1 (osd.1) 126 : cluster [DBG] 9.2 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:00.769881+0000 osd.1 (osd.1) 127 : cluster [DBG] 9.2 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 327680 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:32.517806+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 327680 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:33.517896+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.a scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.a scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 319488 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:34.518001+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:03.872881+0000 osd.1 (osd.1) 128 : cluster [DBG] 8.a scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:03.887043+0000 osd.1 (osd.1) 129 : cluster [DBG] 8.a scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 129) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:03.872881+0000 osd.1 (osd.1) 128 : cluster [DBG] 8.a scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:03.887043+0000 osd.1 (osd.1) 129 : cluster [DBG] 8.a scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 311296 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783309 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:35.518159+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.13 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.009399414s of 10.021649361s, submitted: 8
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.13 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 311296 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:36.518282+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:05.814363+0000 osd.1 (osd.1) 130 : cluster [DBG] 8.13 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:05.828111+0000 osd.1 (osd.1) 131 : cluster [DBG] 8.13 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 131) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:05.814363+0000 osd.1 (osd.1) 130 : cluster [DBG] 8.13 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:05.828111+0000 osd.1 (osd.1) 131 : cluster [DBG] 8.13 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 303104 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:37.518463+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 303104 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:38.518708+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 303104 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:39.518811+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 294912 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784457 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:40.518916+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 294912 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:41.519038+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 286720 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:42.519133+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 286720 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:43.519230+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 286720 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:44.519332+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 278528 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784457 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:45.519435+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 278528 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:46.519538+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:15.724546+0000 osd.1 (osd.1) 132 : cluster [DBG] 8.16 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:15.738731+0000 osd.1 (osd.1) 133 : cluster [DBG] 8.16 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 133) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:15.724546+0000 osd.1 (osd.1) 132 : cluster [DBG] 8.16 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:15.738731+0000 osd.1 (osd.1) 133 : cluster [DBG] 8.16 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 270336 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:47.519717+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 270336 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:48.519848+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 262144 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:49.519984+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.4 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.841174126s of 13.846213341s, submitted: 4
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.4 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 262144 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 786752 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:50.520087+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:19.660206+0000 osd.1 (osd.1) 134 : cluster [DBG] 9.4 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:19.710028+0000 osd.1 (osd.1) 135 : cluster [DBG] 9.4 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 135) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:19.660206+0000 osd.1 (osd.1) 134 : cluster [DBG] 9.4 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:19.710028+0000 osd.1 (osd.1) 135 : cluster [DBG] 9.4 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 262144 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:51.520226+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 352256 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:52.520312+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.17 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.17 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 344064 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:53.520439+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:22.688626+0000 osd.1 (osd.1) 136 : cluster [DBG] 8.17 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:22.702803+0000 osd.1 (osd.1) 137 : cluster [DBG] 8.17 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 137) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:22.688626+0000 osd.1 (osd.1) 136 : cluster [DBG] 8.17 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:22.702803+0000 osd.1 (osd.1) 137 : cluster [DBG] 8.17 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 344064 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:54.520579+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 335872 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 787900 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:55.520674+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.19 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.19 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 335872 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:56.520766+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:25.716707+0000 osd.1 (osd.1) 138 : cluster [DBG] 8.19 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:25.730719+0000 osd.1 (osd.1) 139 : cluster [DBG] 8.19 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 139) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:25.716707+0000 osd.1 (osd.1) 138 : cluster [DBG] 8.19 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:25.730719+0000 osd.1 (osd.1) 139 : cluster [DBG] 8.19 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 327680 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:57.520913+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:26.759797+0000 osd.1 (osd.1) 140 : cluster [DBG] 8.1e scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:26.773894+0000 osd.1 (osd.1) 141 : cluster [DBG] 8.1e scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 141) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:26.759797+0000 osd.1 (osd.1) 140 : cluster [DBG] 8.1e scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:26.773894+0000 osd.1 (osd.1) 141 : cluster [DBG] 8.1e scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 327680 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:58.521078+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 319488 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:59.521199+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.108485222s of 10.121310234s, submitted: 8
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 311296 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 791343 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:00.521302+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:29.782159+0000 osd.1 (osd.1) 142 : cluster [DBG] 9.a scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:29.827394+0000 osd.1 (osd.1) 143 : cluster [DBG] 9.a scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 143) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:29.782159+0000 osd.1 (osd.1) 142 : cluster [DBG] 9.a scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:29.827394+0000 osd.1 (osd.1) 143 : cluster [DBG] 9.a scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.10 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.10 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 303104 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:01.521446+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:30.745015+0000 osd.1 (osd.1) 144 : cluster [DBG] 9.10 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:30.766212+0000 osd.1 (osd.1) 145 : cluster [DBG] 9.10 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 145) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:30.745015+0000 osd.1 (osd.1) 144 : cluster [DBG] 9.10 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:30.766212+0000 osd.1 (osd.1) 145 : cluster [DBG] 9.10 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 294912 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:02.521604+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 294912 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:03.521749+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 286720 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:04.521911+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.12 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.12 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 286720 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 793639 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:05.522045+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:34.703980+0000 osd.1 (osd.1) 146 : cluster [DBG] 9.12 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:34.732261+0000 osd.1 (osd.1) 147 : cluster [DBG] 9.12 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 147) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:34.703980+0000 osd.1 (osd.1) 146 : cluster [DBG] 9.12 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:34.732261+0000 osd.1 (osd.1) 147 : cluster [DBG] 9.12 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 286720 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:06.522193+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:35.661492+0000 osd.1 (osd.1) 148 : cluster [DBG] 9.14 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:35.693218+0000 osd.1 (osd.1) 149 : cluster [DBG] 9.14 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 149) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:35.661492+0000 osd.1 (osd.1) 148 : cluster [DBG] 9.14 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:35.693218+0000 osd.1 (osd.1) 149 : cluster [DBG] 9.14 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 278528 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:07.522367+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:36.708791+0000 osd.1 (osd.1) 150 : cluster [DBG] 9.1a scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:36.737091+0000 osd.1 (osd.1) 151 : cluster [DBG] 9.1a scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.5 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 151) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:36.708791+0000 osd.1 (osd.1) 150 : cluster [DBG] 9.1a scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:36.737091+0000 osd.1 (osd.1) 151 : cluster [DBG] 9.1a scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.5 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 278528 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:08.522482+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:37.700491+0000 osd.1 (osd.1) 152 : cluster [DBG] 11.5 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:37.714615+0000 osd.1 (osd.1) 153 : cluster [DBG] 11.5 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 153) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:37.700491+0000 osd.1 (osd.1) 152 : cluster [DBG] 11.5 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:37.714615+0000 osd.1 (osd.1) 153 : cluster [DBG] 11.5 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 270336 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:09.522629+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 270336 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 797083 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:10.522769+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 262144 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:11.522894+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 270336 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:12.523030+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 270336 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:13.523162+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 262144 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:14.523298+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.904716492s of 14.919413567s, submitted: 12
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 262144 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 798231 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:15.523431+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:44.701072+0000 osd.1 (osd.1) 154 : cluster [DBG] 11.7 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:44.715102+0000 osd.1 (osd.1) 155 : cluster [DBG] 11.7 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.a scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.a scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 155) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:44.701072+0000 osd.1 (osd.1) 154 : cluster [DBG] 11.7 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:44.715102+0000 osd.1 (osd.1) 155 : cluster [DBG] 11.7 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 253952 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:16.523571+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:45.711575+0000 osd.1 (osd.1) 156 : cluster [DBG] 11.a scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:45.725673+0000 osd.1 (osd.1) 157 : cluster [DBG] 11.a scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 157) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:45.711575+0000 osd.1 (osd.1) 156 : cluster [DBG] 11.a scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:45.725673+0000 osd.1 (osd.1) 157 : cluster [DBG] 11.a scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 253952 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:17.524450+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.c scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.c scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 237568 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:18.524568+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:47.674532+0000 osd.1 (osd.1) 158 : cluster [DBG] 11.c scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:47.688683+0000 osd.1 (osd.1) 159 : cluster [DBG] 11.c scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 159) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:47.674532+0000 osd.1 (osd.1) 158 : cluster [DBG] 11.c scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:47.688683+0000 osd.1 (osd.1) 159 : cluster [DBG] 11.c scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:19.524698+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 801676 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:20.524793+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:49.686222+0000 osd.1 (osd.1) 160 : cluster [DBG] 11.13 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:49.700470+0000 osd.1 (osd.1) 161 : cluster [DBG] 11.13 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 161) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:49.686222+0000 osd.1 (osd.1) 160 : cluster [DBG] 11.13 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:49.700470+0000 osd.1 (osd.1) 161 : cluster [DBG] 11.13 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 212992 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:21.524925+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 212992 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:22.525017+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:51.744050+0000 osd.1 (osd.1) 162 : cluster [DBG] 11.16 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:51.758202+0000 osd.1 (osd.1) 163 : cluster [DBG] 11.16 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 163) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:51.744050+0000 osd.1 (osd.1) 162 : cluster [DBG] 11.16 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:51.758202+0000 osd.1 (osd.1) 163 : cluster [DBG] 11.16 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 204800 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:23.525149+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.1d deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 11.1d deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 196608 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:24.525246+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:53.735133+0000 osd.1 (osd.1) 164 : cluster [DBG] 11.1d deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:53.749254+0000 osd.1 (osd.1) 165 : cluster [DBG] 11.1d deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 165) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:53.735133+0000 osd.1 (osd.1) 164 : cluster [DBG] 11.1d deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:53.749254+0000 osd.1 (osd.1) 165 : cluster [DBG] 11.1d deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 196608 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 803974 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:25.525381+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 188416 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:26.525486+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.041402817s of 12.060150146s, submitted: 12
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 188416 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:27.525580+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:56.761146+0000 osd.1 (osd.1) 166 : cluster [DBG] 6.1 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:43:56.775299+0000 osd.1 (osd.1) 167 : cluster [DBG] 6.1 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 167) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:56.761146+0000 osd.1 (osd.1) 166 : cluster [DBG] 6.1 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:43:56.775299+0000 osd.1 (osd.1) 167 : cluster [DBG] 6.1 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 188416 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:28.525784+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 180224 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:29.525875+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 180224 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 805121 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:30.525970+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 172032 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:31.526107+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 172032 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:32.526400+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:01.871651+0000 osd.1 (osd.1) 168 : cluster [DBG] 10.19 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:01.885693+0000 osd.1 (osd.1) 169 : cluster [DBG] 10.19 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 169) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:01.871651+0000 osd.1 (osd.1) 168 : cluster [DBG] 10.19 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:01.885693+0000 osd.1 (osd.1) 169 : cluster [DBG] 10.19 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 163840 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:33.526528+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 163840 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:34.526644+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 163840 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 807418 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:35.526751+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:04.863947+0000 osd.1 (osd.1) 170 : cluster [DBG] 10.b scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:04.877474+0000 osd.1 (osd.1) 171 : cluster [DBG] 10.b scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 171) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:04.863947+0000 osd.1 (osd.1) 170 : cluster [DBG] 10.b scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:04.877474+0000 osd.1 (osd.1) 171 : cluster [DBG] 10.b scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.13 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.13 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 1204224 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:36.526871+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:05.906762+0000 osd.1 (osd.1) 172 : cluster [DBG] 10.13 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:05.920906+0000 osd.1 (osd.1) 173 : cluster [DBG] 10.13 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 173) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:05.906762+0000 osd.1 (osd.1) 172 : cluster [DBG] 10.13 deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:05.920906+0000 osd.1 (osd.1) 173 : cluster [DBG] 10.13 deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 1204224 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:37.527011+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 1204224 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:38.527122+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.209887505s of 12.218240738s, submitted: 8
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1196032 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:39.527244+0000)
Nov 26 11:59:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 26 11:59:06 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/448173909' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:08.979433+0000 osd.1 (osd.1) 174 : cluster [DBG] 10.12 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:08.993486+0000 osd.1 (osd.1) 175 : cluster [DBG] 10.12 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 175) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:08.979433+0000 osd.1 (osd.1) 174 : cluster [DBG] 10.12 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:08.993486+0000 osd.1 (osd.1) 175 : cluster [DBG] 10.12 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1187840 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 809716 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:40.527415+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1179648 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:41.527521+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:10.959204+0000 osd.1 (osd.1) 176 : cluster [DBG] 10.11 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:10.973320+0000 osd.1 (osd.1) 177 : cluster [DBG] 10.11 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 177) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:10.959204+0000 osd.1 (osd.1) 176 : cluster [DBG] 10.11 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:10.973320+0000 osd.1 (osd.1) 177 : cluster [DBG] 10.11 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1179648 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:42.527653+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1171456 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:43.527806+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:12.888558+0000 osd.1 (osd.1) 178 : cluster [DBG] 10.10 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:12.902670+0000 osd.1 (osd.1) 179 : cluster [DBG] 10.10 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 179) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:12.888558+0000 osd.1 (osd.1) 178 : cluster [DBG] 10.10 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:12.902670+0000 osd.1 (osd.1) 179 : cluster [DBG] 10.10 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1171456 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:44.527988+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1171456 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 813163 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:45.528175+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:14.925242+0000 osd.1 (osd.1) 180 : cluster [DBG] 10.1a scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:14.939339+0000 osd.1 (osd.1) 181 : cluster [DBG] 10.1a scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1163264 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 181) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:14.925242+0000 osd.1 (osd.1) 180 : cluster [DBG] 10.1a scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:14.939339+0000 osd.1 (osd.1) 181 : cluster [DBG] 10.1a scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:46.528391+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:15.875698+0000 osd.1 (osd.1) 182 : cluster [DBG] 10.6 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:15.889845+0000 osd.1 (osd.1) 183 : cluster [DBG] 10.6 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 1155072 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 183) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:15.875698+0000 osd.1 (osd.1) 182 : cluster [DBG] 10.6 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:15.889845+0000 osd.1 (osd.1) 183 : cluster [DBG] 10.6 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:47.528614+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.f scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.f scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1138688 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:48.528747+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:17.856446+0000 osd.1 (osd.1) 184 : cluster [DBG] 10.f scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:17.870551+0000 osd.1 (osd.1) 185 : cluster [DBG] 10.f scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1138688 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 185) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:17.856446+0000 osd.1 (osd.1) 184 : cluster [DBG] 10.f scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:17.870551+0000 osd.1 (osd.1) 185 : cluster [DBG] 10.f scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:49.528880+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:18.828849+0000 osd.1 (osd.1) 186 : cluster [DBG] 10.2 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:18.842898+0000 osd.1 (osd.1) 187 : cluster [DBG] 10.2 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.827135086s of 10.842758179s, submitted: 14
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 1130496 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817756 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 187) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:18.828849+0000 osd.1 (osd.1) 186 : cluster [DBG] 10.2 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:18.842898+0000 osd.1 (osd.1) 187 : cluster [DBG] 10.2 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:50.529033+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:19.822173+0000 osd.1 (osd.1) 188 : cluster [DBG] 10.14 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:19.839878+0000 osd.1 (osd.1) 189 : cluster [DBG] 10.14 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 1130496 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:51.529228+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 189) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:19.822173+0000 osd.1 (osd.1) 188 : cluster [DBG] 10.14 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:19.839878+0000 osd.1 (osd.1) 189 : cluster [DBG] 10.14 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 1130496 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:52.529351+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1122304 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:53.529459+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:22.852824+0000 osd.1 (osd.1) 190 : cluster [DBG] 6.e scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:22.870516+0000 osd.1 (osd.1) 191 : cluster [DBG] 6.e scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 191) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:22.852824+0000 osd.1 (osd.1) 190 : cluster [DBG] 6.e scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:22.870516+0000 osd.1 (osd.1) 191 : cluster [DBG] 6.e scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1122304 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:54.529677+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1122304 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 818903 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:55.529826+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1114112 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:56.530032+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:25.808723+0000 osd.1 (osd.1) 192 : cluster [DBG] 6.6 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:25.826542+0000 osd.1 (osd.1) 193 : cluster [DBG] 6.6 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 193) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:25.808723+0000 osd.1 (osd.1) 192 : cluster [DBG] 6.6 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:25.826542+0000 osd.1 (osd.1) 193 : cluster [DBG] 6.6 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1114112 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:57.530206+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1105920 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:58.530364+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1105920 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:59.530475+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1105920 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 820050 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:00.530580+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 1089536 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:01.530677+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 1089536 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:02.530776+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 1081344 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:03.530868+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 1073152 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:04.530996+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 1064960 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 820050 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:05.531088+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.877879143s of 15.884715080s, submitted: 6
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 1064960 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:06.531177+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:35.706917+0000 osd.1 (osd.1) 194 : cluster [DBG] 6.2 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:35.721049+0000 osd.1 (osd.1) 195 : cluster [DBG] 6.2 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 195) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:35.706917+0000 osd.1 (osd.1) 194 : cluster [DBG] 6.2 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:35.721049+0000 osd.1 (osd.1) 195 : cluster [DBG] 6.2 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 1064960 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:07.531344+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 1056768 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:08.531500+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 1056768 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:09.531590+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 1056768 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 821197 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:10.531747+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 1 last_log 196 sent 195 num 1 unsent 1 sending 1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:40.527816+0000 osd.1 (osd.1) 196 : cluster [DBG] 6.4 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 196) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:40.527816+0000 osd.1 (osd.1) 196 : cluster [DBG] 6.4 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 1040384 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.c deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:11.531877+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 198 sent 196 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:40.555935+0000 osd.1 (osd.1) 197 : cluster [DBG] 6.4 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:41.514226+0000 osd.1 (osd.1) 198 : cluster [DBG] 6.c deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.c deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 198) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:40.555935+0000 osd.1 (osd.1) 197 : cluster [DBG] 6.4 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:41.514226+0000 osd.1 (osd.1) 198 : cluster [DBG] 6.c deep-scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 1040384 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:12.531991+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 1 last_log 199 sent 198 num 1 unsent 1 sending 1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:41.531905+0000 osd.1 (osd.1) 199 : cluster [DBG] 6.c deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 199) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:41.531905+0000 osd.1 (osd.1) 199 : cluster [DBG] 6.c deep-scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 1032192 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:13.532119+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 1032192 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:14.532215+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 1024000 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 823491 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:15.532345+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 1024000 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:16.532458+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 1024000 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.715986252s of 11.722782135s, submitted: 6
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:17.532589+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:47.429700+0000 osd.1 (osd.1) 200 : cluster [DBG] 6.b scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:47.447370+0000 osd.1 (osd.1) 201 : cluster [DBG] 6.b scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 201) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:47.429700+0000 osd.1 (osd.1) 200 : cluster [DBG] 6.b scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:47.447370+0000 osd.1 (osd.1) 201 : cluster [DBG] 6.b scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 1007616 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:18.532772+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:48.383809+0000 osd.1 (osd.1) 202 : cluster [DBG] 6.d scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:48.405003+0000 osd.1 (osd.1) 203 : cluster [DBG] 6.d scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 203) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:48.383809+0000 osd.1 (osd.1) 202 : cluster [DBG] 6.d scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:48.405003+0000 osd.1 (osd.1) 203 : cluster [DBG] 6.d scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 999424 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:19.532928+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 991232 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 825785 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:20.533026+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 991232 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:21.533146+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 205 sent 203 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:51.365919+0000 osd.1 (osd.1) 204 : cluster [DBG] 9.15 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:51.394190+0000 osd.1 (osd.1) 205 : cluster [DBG] 9.15 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 205) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:51.365919+0000 osd.1 (osd.1) 204 : cluster [DBG] 9.15 scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:51.394190+0000 osd.1 (osd.1) 205 : cluster [DBG] 9.15 scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 983040 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:22.533284+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 983040 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:23.533383+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  log_queue is 2 last_log 207 sent 205 num 2 unsent 2 sending 2
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:53.304983+0000 osd.1 (osd.1) 206 : cluster [DBG] 9.1f scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  will send 2025-11-26T11:44:53.336781+0000 osd.1 (osd.1) 207 : cluster [DBG] 9.1f scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client handle_log_ack log(last 207) v1
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:53.304983+0000 osd.1 (osd.1) 206 : cluster [DBG] 9.1f scrub starts
Nov 26 11:59:06 compute-0 ceph-osd[89074]: log_client  logged 2025-11-26T11:44:53.336781+0000 osd.1 (osd.1) 207 : cluster [DBG] 9.1f scrub ok
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 983040 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:24.533532+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 974848 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:25.533644+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 974848 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:26.533758+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 974848 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:27.533856+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 966656 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:28.533966+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 966656 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:29.534076+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 958464 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:30.534204+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 958464 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:31.534321+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75620352 unmapped: 950272 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:32.534457+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75620352 unmapped: 950272 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:33.534557+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75620352 unmapped: 950272 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:34.534671+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 942080 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:35.534771+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 942080 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:36.534861+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 933888 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:37.534958+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 933888 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:38.535066+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 925696 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:39.535167+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 925696 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:40.535284+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 925696 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:41.535413+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 917504 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:42.535555+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 917504 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:43.535670+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 909312 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:44.535827+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 909312 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:45.535936+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 909312 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:46.536025+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 901120 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:47.536164+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 901120 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:48.536278+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 892928 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:49.536404+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 884736 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:50.536550+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 884736 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:51.536695+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 876544 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:52.536823+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 876544 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:53.536920+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 868352 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:54.537015+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 868352 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:55.537119+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 868352 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:56.537250+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 860160 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:57.537361+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 860160 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:58.537479+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 851968 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:59.537593+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 851968 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:00.537722+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 843776 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:01.537823+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 843776 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:02.537935+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 843776 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:03.538040+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 835584 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:04.538154+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 835584 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:05.538269+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 827392 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:06.538383+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 827392 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:07.540236+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 827392 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:08.540351+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 819200 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:09.540451+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 819200 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:10.540552+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 811008 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:11.540670+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 819200 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:12.540784+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 819200 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:13.540887+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 811008 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:14.540986+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 811008 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:15.541132+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 802816 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:16.541233+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 802816 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:17.541337+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 802816 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:18.541475+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 794624 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:19.541573+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 794624 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:20.541681+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 786432 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:21.541789+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 786432 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:22.541894+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 778240 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:23.541996+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 778240 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:24.542128+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 778240 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:25.542225+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 770048 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:26.542323+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 770048 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:27.542428+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 761856 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:28.542535+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 761856 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:29.542631+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 761856 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:30.542747+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 761856 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:31.542849+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 761856 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:32.542962+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 761856 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:33.543058+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 753664 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:34.543178+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 745472 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:35.543287+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 745472 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:36.543387+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 745472 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:37.543477+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 737280 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:38.543596+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 737280 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:39.543699+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 729088 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:40.543794+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 729088 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:41.543887+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 729088 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:42.543981+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 720896 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:43.544080+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 712704 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:44.544168+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 712704 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:45.544270+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 704512 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:46.544363+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 704512 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:47.544469+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 696320 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:48.544609+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 696320 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:49.544685+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 688128 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:50.544836+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 688128 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:51.544929+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 688128 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:52.545020+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 679936 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:53.545152+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 679936 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:54.545253+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 679936 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:55.545358+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 671744 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:56.545464+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 671744 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:57.545561+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 663552 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:58.545676+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 663552 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:59.545770+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 663552 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:00.545860+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 655360 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:01.545990+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 655360 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:02.546080+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 647168 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:03.546167+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 647168 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:04.546259+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 647168 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:05.546360+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 638976 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:06.546460+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 638976 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:07.546553+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 630784 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:08.546670+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 630784 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:09.546767+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 630784 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:10.546865+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 622592 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:11.546965+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 622592 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:12.547058+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 622592 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:13.547162+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 622592 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:14.547260+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 622592 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:15.547354+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 614400 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:16.547447+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 614400 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:17.547541+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 606208 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:18.547675+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 589824 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:19.547798+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 589824 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:20.547893+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 581632 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:21.548008+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 581632 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:22.548106+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 573440 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:23.548229+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 573440 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:24.548314+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 573440 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:25.548373+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 565248 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:26.548469+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 565248 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:27.548564+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 557056 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:28.548679+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 557056 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:29.548775+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 548864 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:30.548878+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 548864 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:31.548975+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 548864 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:32.549073+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 540672 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:33.549169+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 540672 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:34.549275+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 540672 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:35.549367+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 532480 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:36.549491+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 532480 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:37.549602+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 524288 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:38.549678+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 524288 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:39.549779+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 524288 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:40.549880+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 516096 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:41.549984+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 516096 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:42.550094+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 516096 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:43.550213+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 507904 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:44.550320+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 507904 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:45.550420+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 499712 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:46.550531+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 499712 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:47.550653+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 499712 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:48.550764+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 491520 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:49.550861+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 491520 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:50.550961+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 483328 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:51.551066+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 483328 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:52.551156+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 483328 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:53.551251+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 475136 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:54.551355+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 475136 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:55.551453+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 475136 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:56.551555+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 466944 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:57.551670+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 466944 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:58.551776+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 548864 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:59.551867+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 548864 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:00.551966+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 540672 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:01.552097+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 540672 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:02.552227+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 540672 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:03.552354+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 532480 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:04.552449+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 532480 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:05.552579+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 532480 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:06.552710+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 524288 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:07.552798+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 524288 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:08.552925+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 507904 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:09.553024+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 507904 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:10.553118+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 499712 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:11.553224+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 499712 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:12.553539+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 499712 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:13.553703+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 491520 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:14.553867+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 491520 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:15.553976+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 483328 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:16.554133+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 483328 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:17.554244+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 483328 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:18.554363+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 475136 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:19.554434+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 475136 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:20.554537+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 475136 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:21.554655+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 466944 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:22.554808+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 466944 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:23.554933+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 466944 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:24.555047+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 458752 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:25.555348+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 458752 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:26.555477+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 450560 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:27.555607+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 450560 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:28.555749+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 450560 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:29.555888+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 442368 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:30.555985+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 442368 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:31.556095+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 434176 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:32.557353+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 434176 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:33.557509+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 425984 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:34.557709+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 425984 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:35.557859+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 417792 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:36.557997+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:37.558196+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 417792 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:38.558356+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 417792 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:39.558498+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 409600 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:40.558652+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 409600 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:41.558785+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 401408 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:42.558921+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 401408 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:43.559057+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 393216 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:44.559207+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 393216 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:45.559343+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 393216 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:46.559450+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 385024 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:47.559546+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 385024 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:48.559668+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 385024 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:49.559769+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 376832 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:50.559868+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 376832 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:51.559968+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 368640 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:52.560072+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 368640 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:53.560183+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 368640 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:54.560279+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 360448 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:55.560378+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 360448 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:56.560470+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 352256 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:57.560731+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 352256 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:58.560872+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 352256 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:59.561006+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 344064 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:00.561115+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 344064 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:01.561211+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 335872 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:02.561311+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 335872 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:03.561409+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 335872 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 26 11:59:06 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1540954702' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:04.561503+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 327680 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:05.561597+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 327680 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:06.561684+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 327680 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:07.561783+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 319488 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:08.561896+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 319488 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:09.561994+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 311296 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:10.562084+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 311296 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:11.562187+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 303104 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:12.562280+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 303104 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:13.562392+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 303104 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:14.562491+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 294912 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:15.562597+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 294912 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:16.562675+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 294912 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:17.562782+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 286720 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:18.562898+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 286720 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:19.562994+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 278528 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:20.563088+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 278528 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:21.563194+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 270336 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:22.563307+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 270336 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:23.563401+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 262144 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:24.563500+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 253952 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:25.563601+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 253952 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:26.563705+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 245760 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:27.563797+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 245760 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:28.563916+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 237568 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:29.564024+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 237568 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:30.564136+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 229376 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:31.564253+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 229376 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:32.564360+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 229376 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:33.564458+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 221184 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:34.564567+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 221184 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:35.564703+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 221184 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:36.564793+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 212992 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:37.564887+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 212992 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:38.564994+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 204800 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:39.565090+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 204800 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:40.565193+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 204800 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:41.565341+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 196608 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:42.565446+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 196608 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:43.565559+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 188416 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:44.565661+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 188416 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:45.565757+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 188416 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:46.565848+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 180224 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:47.565945+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 180224 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:48.566056+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 172032 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:49.566152+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 172032 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:50.566251+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 163840 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:51.566346+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 163840 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:52.566439+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 163840 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:53.566530+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 155648 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 6688 writes, 27K keys, 6688 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6688 writes, 1232 syncs, 5.43 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6688 writes, 27K keys, 6688 commit groups, 1.0 writes per commit group, ingest: 19.31 MB, 0.03 MB/s
                                           Interval WAL: 6688 writes, 1232 syncs, 5.43 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9696ab1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:54.566751+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 98304 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:55.566845+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 90112 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:56.566945+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 90112 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:57.567044+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 90112 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:58.567162+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 81920 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:59.567293+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 81920 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:00.567402+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 73728 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:01.567503+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 73728 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:02.568184+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 73728 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:03.568289+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 65536 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:04.568394+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 65536 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:05.568497+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 57344 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:06.568595+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 57344 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:07.568676+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 57344 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:08.568780+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 49152 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:09.568889+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 40960 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:10.568995+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 40960 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:11.569093+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 32768 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:12.569204+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 32768 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:13.569319+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 24576 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:14.569415+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 24576 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:15.569517+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 16384 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:16.569622+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 16384 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:17.569687+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 8192 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:18.569813+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 8192 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:19.569910+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 8192 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:20.570016+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 0 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:21.570244+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 0 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:22.570346+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 0 heap: 76570624 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:23.570480+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 1040384 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:24.570586+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 1040384 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:25.570686+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 1032192 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:26.570784+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 1032192 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:27.570894+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 1032192 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:28.571005+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 1024000 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:29.571101+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 1015808 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:30.571193+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 1007616 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:31.571398+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 1007616 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:32.571488+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 999424 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:33.571614+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 999424 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:34.571732+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 999424 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:35.571861+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 991232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:36.571968+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 991232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:37.572133+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 991232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:38.572322+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 983040 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:39.572408+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 983040 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:40.572561+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 974848 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:41.572664+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 974848 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:42.572797+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 974848 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 325.796325684s of 325.806579590s, submitted: 8
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:43.572889+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 884736 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:44.573037+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 884736 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:45.573128+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 884736 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:46.573262+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 884736 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:47.573374+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 884736 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:48.573489+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 884736 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:49.573589+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 884736 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:50.573672+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 884736 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:51.573765+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 876544 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:52.573882+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 876544 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:53.573986+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 876544 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:54.574091+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 860160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:55.574195+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 860160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:56.574318+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 860160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:57.574407+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 851968 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:58.574541+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 843776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:59.574652+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 835584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:00.574751+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 835584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:01.574846+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 827392 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:02.574940+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 827392 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:03.575039+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 827392 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:04.575148+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 819200 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:05.575245+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 819200 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:06.575350+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 811008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:07.575454+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 811008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:08.575575+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 811008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:09.575679+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 802816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:10.575782+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 802816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:11.575879+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 794624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:12.575977+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 794624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:13.576074+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 794624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:14.576163+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 786432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:15.576255+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 786432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:16.576389+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 778240 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:17.576488+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 778240 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:18.576608+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76849152 unmapped: 770048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:19.576671+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76849152 unmapped: 770048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:20.576838+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76849152 unmapped: 770048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:21.576933+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 761856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:22.577061+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 761856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:23.577150+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 753664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:24.577278+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 753664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:25.577379+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 753664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:26.577507+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 745472 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:27.577598+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 745472 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:28.577728+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 745472 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:29.577828+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 737280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:30.577920+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 737280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:31.578050+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 737280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:32.578146+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 729088 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:33.578246+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 729088 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:34.578347+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 720896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:35.578448+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 720896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:36.578543+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 712704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:37.578656+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 712704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:38.578764+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 712704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:39.578851+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 712704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:40.578947+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 704512 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:41.579039+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 704512 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:42.579135+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 696320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:43.579229+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 688128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:44.579327+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 679936 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:45.579423+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 679936 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:46.579519+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 671744 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:47.579612+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 671744 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:48.579778+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 671744 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:49.579869+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 663552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:50.579959+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 663552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:51.580060+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 655360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:52.580152+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 655360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:53.580244+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 655360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:54.580369+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 655360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:55.580556+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 655360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:56.580668+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 655360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:57.580768+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 655360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:58.581009+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:59.581102+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:00.581167+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:01.581263+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:02.581379+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:03.581481+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:04.581580+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:05.581663+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:06.581753+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:07.581837+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:08.581966+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:09.582061+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:10.582161+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:11.582244+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:12.582332+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:13.582423+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:14.582512+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:15.582600+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:16.582683+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:17.582772+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:18.582897+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:19.582988+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:20.583091+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:21.583183+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:22.583271+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:23.583389+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:24.583508+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:25.583596+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:26.583684+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:27.583776+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 647168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:28.583889+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 638976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:29.584594+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 638976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:30.584680+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 638976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:31.584768+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 638976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:32.584874+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 630784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:33.584992+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 630784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:34.585104+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 630784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:35.585200+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 630784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:36.585300+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 630784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:37.585423+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 630784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:38.585551+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 630784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:39.585697+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 630784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:40.585799+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 630784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:41.585890+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 630784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:42.585986+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 630784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:43.586081+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 630784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:44.586217+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 630784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:45.589243+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 630784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:46.589346+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 630784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:47.589450+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 630784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:48.589610+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 630784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:49.589705+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 630784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:50.589832+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 630784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:51.589991+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 630784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:52.590109+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 630784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:53.590229+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 630784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:54.590397+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 630784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:55.590528+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 630784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:56.590678+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 622592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:57.590786+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 622592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:58.590942+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 622592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:59.591087+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 622592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:00.591231+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 622592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:01.591370+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 622592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:02.591487+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 622592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:03.591577+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 622592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:04.591732+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 622592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:05.591834+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 622592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:06.591933+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 622592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:07.592063+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 622592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:08.592214+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 622592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:09.592340+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 622592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:10.592459+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 622592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:11.592540+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 622592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:12.592674+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 622592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:13.592799+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 614400 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:14.592927+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 614400 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:15.593067+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 614400 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:16.593173+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 614400 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:17.593318+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 614400 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:18.593472+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 614400 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:19.593666+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 614400 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:20.593794+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 614400 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:21.593889+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 614400 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:22.594025+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 614400 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:23.594155+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 614400 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:24.594291+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 614400 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:25.594434+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 614400 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:26.594526+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 614400 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:27.594680+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 614400 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:28.594810+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 614400 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:29.595010+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 614400 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:30.595140+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 614400 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:31.595244+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 614400 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:32.595391+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 614400 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:33.595536+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 614400 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:34.595629+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 614400 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:35.595766+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 614400 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:36.595862+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 614400 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:37.595959+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 614400 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:38.596112+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 606208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:39.596248+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 606208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:40.596384+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 606208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:41.596497+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 606208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:42.596627+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 606208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:43.596801+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 606208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:44.596933+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 606208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:45.597027+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 606208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:46.597124+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 606208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:47.597224+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 606208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:48.597471+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 606208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:49.597577+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 606208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:50.597707+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 606208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:51.597832+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 606208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:52.597952+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 606208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:53.598073+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 606208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:54.598198+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 606208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:55.598332+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 606208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:56.598467+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 606208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:57.598597+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 606208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:58.598726+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 589824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:59.598833+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 589824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:00.598970+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 589824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:01.599072+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 589824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:02.599204+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 589824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:03.599378+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:04.599529+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:05.599627+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:06.599803+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:07.599955+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:08.600083+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:09.600226+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:10.600359+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:11.600477+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:12.600608+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:13.600729+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:14.600854+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:15.601006+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:16.601104+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:17.601202+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:18.601371+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:19.601494+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:20.601666+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:21.601763+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:22.601883+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:23.602001+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:24.602143+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:25.602256+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:26.602349+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:27.602467+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:28.602602+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:29.602711+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:30.602812+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 581632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:31.602964+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 573440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:32.603110+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 573440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:33.603215+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 573440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:34.603360+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 573440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:35.603486+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 573440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:36.603744+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 573440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:37.603864+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 573440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:38.603985+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 573440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:39.604700+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 573440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:40.605066+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 573440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:41.605154+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 573440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:42.605307+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 573440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:43.605462+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 573440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:44.605591+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 573440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:45.605723+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 573440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:46.605813+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 573440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:47.605965+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 573440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:48.606096+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 573440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:49.606283+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 573440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:50.606387+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 565248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:51.606487+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 565248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:52.606622+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 565248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:53.606748+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 565248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:54.606851+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 565248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:55.606992+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 565248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:56.607090+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 565248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:57.607269+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 565248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:58.607428+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 ms_handle_reset con 0x55c96b7b0c00 session 0x55c96b72ef00
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: handle_auth_request added challenge on 0x55c96b7b1800
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 ms_handle_reset con 0x55c96bdf2c00 session 0x55c96be46000
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: handle_auth_request added challenge on 0x55c96b7b0c00
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 565248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:59.607590+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 565248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:00.607691+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 565248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:01.607783+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 565248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:02.607905+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 565248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:03.607996+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 565248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:04.608128+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 565248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:05.608257+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 565248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:06.608356+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 565248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:07.608455+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 565248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:08.608598+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 565248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:09.608728+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 565248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:10.608841+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 565248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:11.608971+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 565248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:12.609074+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 565248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:13.609229+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 557056 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:14.609356+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 557056 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:15.609488+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 557056 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:16.609619+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 557056 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:17.609777+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 557056 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:18.609931+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 557056 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:19.610024+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 557056 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:20.610197+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 557056 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:21.610326+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 557056 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:22.610479+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 557056 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:23.610607+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 548864 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:24.610734+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 548864 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:25.610862+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 548864 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:26.610994+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 540672 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:27.611134+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 540672 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:28.611278+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:29.611416+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:30.611552+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:31.611680+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:32.611800+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:33.611955+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:34.612084+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:35.612688+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:36.612783+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:37.612933+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:38.613043+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:39.613717+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:40.613837+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:41.613924+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:42.614048+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:43.614178+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:44.614301+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:45.614414+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:46.614533+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:47.614674+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:48.614790+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:49.614893+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:50.614998+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:51.615495+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:52.615624+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:53.615762+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:54.615887+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:55.615977+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:56.616077+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:57.616214+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 532480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:58.616349+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 524288 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:59.616452+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 524288 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:00.616586+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 524288 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:01.616712+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 524288 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:02.616817+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 524288 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:03.617303+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 524288 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:04.617462+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 524288 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:05.617624+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 524288 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:06.617781+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 524288 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:07.617902+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 524288 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:08.618051+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:09.618153+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:10.618305+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:11.618408+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:12.618533+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:13.618694+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:14.618829+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:15.618984+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:16.619114+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:17.619219+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:18.619360+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:19.619492+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:20.619629+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:21.622876+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:22.623023+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:23.623163+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:24.623295+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:25.623426+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:26.623556+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:27.623678+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:28.623804+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:29.623935+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:30.624024+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:31.624112+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:32.624210+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:33.624322+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:34.624475+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:35.624630+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:36.624795+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:37.624950+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:38.625085+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:39.625187+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:40.625279+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:41.625374+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:42.625463+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 516096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:43.625597+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:44.625697+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:45.625812+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:46.625923+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:47.626031+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:48.626180+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:49.626351+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:50.626468+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:51.626559+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:52.626688+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:53.626819+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:54.626943+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:55.627054+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:56.627158+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:57.627315+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:58.627470+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:59.627629+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:00.628023+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:01.628114+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:02.628257+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:03.628418+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:04.628540+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:05.628659+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:06.628758+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:07.628886+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:08.629025+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:09.629159+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:10.629316+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:11.629442+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:12.629566+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 507904 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:13.629678+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:14.629805+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:15.629949+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:16.630110+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:17.630340+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:18.630525+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:19.630669+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:20.630804+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:21.630929+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:22.631094+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:23.631198+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:24.631333+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:25.631463+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:26.631568+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:27.631695+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:28.632338+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:29.632470+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:30.632647+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:31.632746+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:32.632853+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:33.632986+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:34.633097+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:35.633208+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:36.633316+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:37.633443+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:38.633585+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:39.633729+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:40.633827+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:41.633936+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:42.634065+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:43.634224+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:44.634324+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:45.634422+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:46.634528+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:47.634683+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:48.634813+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:49.634901+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:50.635104+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:51.635211+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 499712 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:52.635310+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:53.635433+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:54.635521+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:55.635668+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:56.635787+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:57.635903+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:58.636041+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:59.636177+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:00.636346+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:01.636457+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:02.636592+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:03.636722+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:04.636828+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:05.636928+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:06.637041+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:07.637172+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:08.637288+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:09.637439+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:10.637570+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:11.637669+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:12.637764+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:13.638131+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:14.638247+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:15.638342+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:16.638454+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:17.638573+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:18.638700+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 491520 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:19.638838+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:20.638982+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:21.639070+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:22.639160+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:23.639271+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:24.639376+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:25.639465+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:26.639557+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:27.639678+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:28.639813+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:29.639932+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:30.640053+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:31.640158+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:32.640312+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:33.640436+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:34.640555+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:35.640682+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:36.640792+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:37.640917+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:38.641071+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:39.641198+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:40.641330+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:41.641430+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:42.641586+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:43.641673+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:44.641810+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:45.641939+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:46.642048+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:47.642153+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:48.642295+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:49.642389+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:50.642513+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:51.642613+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:52.642728+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:53.642881+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:54.643013+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:55.643164+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:56.643257+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:57.643415+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:58.643587+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:59.643743+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:00.643852+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 483328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:01.643957+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:02.644089+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:03.644235+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:04.644346+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:05.644482+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:06.644582+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:07.644711+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:08.644840+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:09.644936+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:10.645035+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:11.645142+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:12.645250+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:13.645361+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:14.645498+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:15.645660+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:16.645795+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:17.645922+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:18.646048+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:19.646185+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:20.646306+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:21.646416+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:22.646519+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:23.646624+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:24.646742+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:25.646837+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:26.647990+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:27.648086+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 475136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:28.648198+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 466944 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:29.648295+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 466944 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:30.648399+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 466944 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:31.648547+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:06 compute-0 ceph-osd[89074]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 466944 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: bluestore.MempoolThread(0x55c969789b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828081 data_alloc: 218103808 data_used: 196608
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:32.648676+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 466944 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:33.648780+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: do_command 'config diff' '{prefix=config diff}'
Nov 26 11:59:06 compute-0 ceph-osd[89074]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 26 11:59:06 compute-0 ceph-osd[89074]: do_command 'config show' '{prefix=config show}'
Nov 26 11:59:06 compute-0 ceph-osd[89074]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 26 11:59:06 compute-0 ceph-osd[89074]: do_command 'counter dump' '{prefix=counter dump}'
Nov 26 11:59:06 compute-0 ceph-osd[89074]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1122304 heap: 78667776 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: do_command 'counter schema' '{prefix=counter schema}'
Nov 26 11:59:06 compute-0 ceph-osd[89074]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:34.648882+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fca5c000/0x0/0x4ffc00000, data 0x11aa67/0x1c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 11:59:06 compute-0 ceph-osd[89074]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 1966080 heap: 79716352 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: tick
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_tickets
Nov 26 11:59:06 compute-0 ceph-osd[89074]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:35.648986+0000)
Nov 26 11:59:06 compute-0 ceph-osd[89074]: do_command 'log dump' '{prefix=log dump}'
Nov 26 11:59:06 compute-0 rsyslogd[960]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 11:59:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 26 11:59:06 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/5919268' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 26 11:59:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 26 11:59:06 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/722340622' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 26 11:59:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:59:06 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14513 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 11:59:06 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14515 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:06 compute-0 ceph-mon[74928]: pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:59:06 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/448173909' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 26 11:59:06 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1540954702' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 26 11:59:06 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/5919268' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 26 11:59:06 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/722340622' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 26 11:59:07 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14517 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 11:59:07 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14519 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:07 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14521 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 11:59:07 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14525 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 11:59:07 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:59:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 26 11:59:07 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4252303560' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 26 11:59:07 compute-0 ceph-mon[74928]: from='client.14513 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 11:59:07 compute-0 ceph-mon[74928]: from='client.14515 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:07 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/4252303560' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 26 11:59:07 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14529 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 11:59:08 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 26 11:59:08 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3543085591' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 26 11:59:08 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14533 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 11:59:08 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 26 11:59:08 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1332323143' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 26 11:59:08 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14537 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 11:59:08 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 26 11:59:08 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2909511272' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 26 11:59:08 compute-0 ceph-mon[74928]: from='client.14517 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 11:59:08 compute-0 ceph-mon[74928]: from='client.14519 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:08 compute-0 ceph-mon[74928]: from='client.14521 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 11:59:08 compute-0 ceph-mon[74928]: from='client.14525 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 11:59:08 compute-0 ceph-mon[74928]: pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:59:08 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3543085591' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 26 11:59:08 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1332323143' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 26 11:59:08 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2909511272' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 26 11:59:08 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 26 11:59:08 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 26 11:59:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 26 11:59:09 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2985465215' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938455582s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621139526s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938455582s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621139526s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938455582s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621139526s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938433647s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 112.621215820s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938420296s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621215820s@ mbc={}] exit Reset 0.000025 1 0.000042
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938420296s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621215820s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938420296s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621215820s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938420296s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621215820s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938420296s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621215820s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938420296s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621215820s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 10.061585 10 0.000057
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 10.069283 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.067429 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 11.067440 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938388824s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 112.621246338s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938374519s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621246338s@ mbc={}] exit Reset 0.000026 1 0.000039
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938374519s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621246338s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 10.061619 10 0.000025
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938374519s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621246338s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 10.069250 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938374519s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621246338s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.066954 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938374519s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621246338s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 11.067075 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938374519s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621246338s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938362122s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 112.621253967s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938333511s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621253967s@ mbc={}] exit Reset 0.000037 1 0.000048
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938333511s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621253967s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938333511s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621253967s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938333511s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621253967s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938333511s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621253967s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938333511s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621253967s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938465118s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 112.621131897s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938138962s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621131897s@ mbc={}] exit Reset 0.000337 1 0.000350
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938138962s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621131897s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938138962s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621131897s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938138962s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621131897s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938138962s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621131897s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50 pruub=13.938138962s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.621131897s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.17(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000021 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000014
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 50 handle_osd_map epochs [50,50], i have 50, src has [1,50]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.15(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000196 1 0.000023
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000013
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000079 1 0.000026
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000019 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000110 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1b(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000014 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000014
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000053 1 0.000019
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.14(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000012 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000042 1 0.000020
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.14(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000023 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000042 1 0.000018
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.18(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000014 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000025 1 0.000016
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.11(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000011 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000008
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000031 1 0.000026
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000064 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1f(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000014 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000063 1 0.000019
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.10(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000011 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000025 1 0.000017
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.1(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000011 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000032 1 0.000022
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.f(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000010 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000034 1 0.000017
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.3(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000011 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000038 1 0.000017
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.c(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000009 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000031 1 0.000018
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.d(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000009 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000030 1 0.000017
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000050 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.e(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000018 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000028 1 0.000017
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.3(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000009 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000024 1 0.000016
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000042 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.e(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000013 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000040 1 0.000038
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.f(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000010 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000035 1 0.000018
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000055 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.9(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000011 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000031 1 0.000019
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000050 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.f(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000009 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000029 1 0.000017
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.4(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000013 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000038 1 0.000023
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.b(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000016 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000030 1 0.000017
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.9(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000010 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000025 1 0.000018
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000009 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000005
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000034 1 0.000016
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000053 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.f(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000010 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000028 1 0.000036
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.9(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000010 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000242
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000044 1 0.000022
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.7(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000013 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000025 1 0.000017
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000044 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.6(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000010 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000029 1 0.000017
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.6(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000018 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000028 1 0.000021
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.5(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000009 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000037 1 0.000016
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000055 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.4(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000010 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000030 1 0.000017
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.18(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000013 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000024 1 0.000033
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1f(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000013 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000043 1 0.000018
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1f(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000009 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000010
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000027 1 0.000017
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000046 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1d(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000010 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000035 1 0.000018
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1d(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000013 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000010
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000034 1 0.000016
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000053 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.13(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000011 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000033 1 0.000016
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.13(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000035 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000032
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000052 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000061 1 0.000135
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000136 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.19(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000037 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000055
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000056 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000059 1 0.000147
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000027 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000134 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.b(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000026 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000033
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000052 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000064 1 0.001020
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000031 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000146 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.19(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000019 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000020 1 0.000024
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000109 1 0.000051
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1b(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.001058 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000018
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000267 1 0.000076
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000034 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000314 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1a(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000018 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000035 1 0.000020
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.17(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000011 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000031 1 0.000017
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000051 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.10(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000033 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.6(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000021 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000042 1 0.000021
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000778 1 0.000021
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012230 2 0.000024
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.14( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012148 2 0.000019
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.14( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.14( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.14( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.14( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012080 2 0.000024
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.14( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.14( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.14( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012005 2 0.000014
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011721 2 0.000019
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.011645 2 0.000015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011476 2 0.000017
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011406 2 0.000015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.1( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011643 2 0.000016
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.1( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.1( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.1( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011376 2 0.000015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010896 2 0.000028
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.010573 2 0.000017
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010535 2 0.000015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010458 2 0.000015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.9( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010394 2 0.000014
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.9( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.9( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.9( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011368 2 0.000015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010213 2 0.000016
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.6( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009669 2 0.000024
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.6( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.6( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.6( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009268 2 0.000015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.4( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009394 2 0.000015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.4( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.4( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.4( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008923 2 0.000016
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008683 2 0.000063
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010000 2 0.000020
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.6( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009690 2 0.000015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.6( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.6( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.6( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009230 2 0.000018
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.19( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007593 2 0.001147
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.19( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.19( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.19( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005989 2 0.000019
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[8.1a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005172 2 0.000020
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.10( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004942 2 0.000027
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.10( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.10( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 50 handle_osd_map epochs [50,50], i have 50, src has [1,50]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.10( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1e(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000015 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000009
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000046 1 0.000023
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.d(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000021 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000035 1 0.000031
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.7(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000026 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000022
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000040 1 0.000018
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.4(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000011 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000025 1 0.000016
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.8(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000010 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000023 1 0.000016
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000009 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000036 1 0.000016
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.e(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000009 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000027 1 0.000016
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.15(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000019 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000008
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000042 1 0.000020
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.16(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000010 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000027 1 0.000032
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.9(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000009 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000003 1 0.000005
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000029 1 0.000032
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015972 2 0.000189
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.17(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000012 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000043 1 0.000016
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.d( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.004750 2 0.000018
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.d( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.d( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.d( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004701 2 0.000018
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004638 2 0.000015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004577 2 0.000014
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004500 2 0.000015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.15( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.004076 2 0.000018
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.15( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.15( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.15( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.e( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.004470 2 0.000016
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.e( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.e( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.e( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004015 2 0.000015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.9( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.003946 2 0.000016
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.9( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.9( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003872 2 0.000016
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.9( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006110 2 0.000021
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 50 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:30.017688+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:40:59.321728+0000 osd.0 (osd.0) 52 : cluster [DBG] 5.4 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:40:59.335851+0000 osd.0 (osd.0) 53 : cluster [DBG] 5.4 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 60366848 unmapped: 1441792 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 53) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:40:59.321728+0000 osd.0 (osd.0) 52 : cluster [DBG] 5.4 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:40:59.335851+0000 osd.0 (osd.0) 53 : cluster [DBG] 5.4 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 50 handle_osd_map epochs [50,51], i have 50, src has [1,51]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 50 handle_osd_map epochs [50,51], i have 51, src has [1,51]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.990591 2 0.000018
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999897 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.990944 2 0.000019
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.1( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.984352 2 0.000021
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.1( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.988921 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999944 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.6( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991121 2 0.000013
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.6( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000846 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.6( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.000919 2 0.000023
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.000973 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.000985 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.984547 2 0.000057
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.989230 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.1( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991094 2 0.000011
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001155 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991345 2 0.000012
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001850 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991369 2 0.000013
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001959 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.984440 2 0.000044
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.988460 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.984522 2 0.000012
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.988583 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 lc 44'56 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.14( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991877 2 0.000014
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.14( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004085 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.14( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991929 2 0.000025
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004238 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.004358 2 0.000038
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.004477 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.004490 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.988558 2 0.000027
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004761 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000048 1 0.000060
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.990071 2 0.000013
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.996114 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.996202 2 0.000053
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.996526 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.996538 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.19( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.990159 2 0.000025
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.19( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.997890 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000023 1 0.000035
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.19( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.999284 2 0.000093
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.999451 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.999541 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991592 2 0.000012
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000902 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000026 1 0.000036
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.000627 2 0.000023
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.983923 2 0.000032
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.990102 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.000694 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.000707 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991601 2 0.000011
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000368 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.000436 2 0.000023
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.000497 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.000524 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000056 1 0.000080
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000029 1 0.000058
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.003081 2 0.000022
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.003133 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.003144 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000043 1 0.000036
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.1( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992130 2 0.000013
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.1( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003824 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.1( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991958 2 0.000012
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002217 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.002353 2 0.000022
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.003531 2 0.000025
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.003592 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.003602 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000035 1 0.000037
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992318 2 0.000015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003745 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992431 2 0.000013
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003892 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992522 2 0.000013
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004050 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.e( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.985448 2 0.000052
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.e( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.989966 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.e( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.e( v 49'65 lc 44'48 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.003244 2 0.000024
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.003307 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.003318 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000021 1 0.000030
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000011 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992553 2 0.000013
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003507 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992430 2 0.000012
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003842 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992610 2 0.000036
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 1.003246 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.d( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.985973 2 0.000036
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.d( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.990780 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.d( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.d( v 49'65 lc 44'50 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.985718 2 0.000016
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.989652 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.003470 2 0.000023
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.003528 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.003538 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000021 1 0.000030
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.986105 2 0.000022
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.990865 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.6( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992624 2 0.000013
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.6( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002360 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.6( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.997271 2 0.000024
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.997331 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.997341 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000033 1 0.000033
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991439 2 0.000015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.996670 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.9( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993005 2 0.000012
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.9( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003440 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.9( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993427 2 0.000012
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005474 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.14( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993486 2 0.000014
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.14( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005626 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.14( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.15( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.986318 2 0.000037
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.15( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.990454 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.15( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.15( v 49'65 lc 44'46 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.986445 2 0.000012
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.991061 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.999540 2 0.000098
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.000591 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.000674 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000020 1 0.000028
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.4( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993166 2 0.000011
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.4( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002606 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.4( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.002724 2 0.000022
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.002788 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.002797 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000020 1 0.000029
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993746 2 0.000016
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.005433 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993834 2 0.000011
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005633 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.005744 2 0.000039
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.005816 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.005835 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000020 1 0.000029
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.001713 2 0.000127
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.001877 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.001958 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000018 1 0.000026
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.10( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992097 2 0.000014
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.10( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.997836 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.10( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.002414 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.004878 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000034 1 0.002500
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000046 1 0.000058
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.016086 7 0.000095
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.016385 7 0.000050
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.016008 7 0.000102
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.016529 7 0.000038
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.016038 7 0.000031
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.016626 7 0.000052
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 51 handle_osd_map epochs [51,51], i have 51, src has [1,51]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 51 handle_osd_map epochs [51,51], i have 51, src has [1,51]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006080 4 0.000114
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 lc 44'56 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010174 4 0.000107
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010127 4 0.000054
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.e( v 49'65 lc 44'48 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.d( v 49'65 lc 44'50 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.15( v 49'65 lc 44'46 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010256 4 0.000043
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010247 4 0.000031
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010303 4 0.000206
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010308 4 0.000045
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010313 4 0.000027
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010292 4 0.000028
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 lc 44'56 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.010305 5 0.000063
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 lc 44'56 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010257 4 0.000029
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010283 4 0.000026
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010193 4 0.000030
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010148 4 0.000028
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010123 4 0.000030
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010048 4 0.000028
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010040 4 0.000024
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010005 4 0.000044
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009910 4 0.000037
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009819 4 0.000035
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009663 4 0.000038
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009651 4 0.000026
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.e( v 49'65 lc 44'48 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.009621 4 0.000040
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.e( v 49'65 lc 44'48 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009509 4 0.000028
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009456 4 0.000025
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.009419 4 0.000043
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.d( v 49'65 lc 44'50 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.009382 4 0.000035
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.d( v 49'65 lc 44'50 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009339 4 0.000025
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009187 4 0.000041
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009167 4 0.000026
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009047 4 0.000024
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009026 4 0.000023
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009004 4 0.000023
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008981 4 0.000023
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.15( v 49'65 lc 44'46 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.008954 4 0.000033
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.15( v 49'65 lc 44'46 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008919 4 0.000028
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008813 4 0.000025
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.008684 4 0.000035
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008645 4 0.000025
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008466 4 0.000024
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010000 4 0.002964
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 lc 44'56 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000473 1 0.000021
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 lc 44'56 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 lc 44'56 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 lc 44'56 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030200 7 0.000046
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030326 7 0.000039
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.9( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.085117 1 0.000031
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.9( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.9( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.9( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.e( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active+recovery_wait mbc={255={}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.085418 2 0.000020
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.e( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active+recovery_wait mbc={255={}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.e( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active+recovery_wait mbc={255={}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.e( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active+recovery_wait mbc={255={}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.e( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.003458 1 0.000038
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.e( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.e( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.e( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.088890 2 0.000030
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.286248 2 0.000016
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.286330 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:31.017816+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 33'4 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.378647 1 0.000054
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.d( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active+recovery_wait mbc={255={}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.467582 2 0.000015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.d( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active+recovery_wait mbc={255={}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.d( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active+recovery_wait mbc={255={}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.d( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active+recovery_wait mbc={255={}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 33'4 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 33'4 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000049 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 33'4 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.d( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.003452 1 0.000101
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.d( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.d( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.d( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.15( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active+recovery_wait mbc={255={}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.470982 2 0.000014
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.15( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active+recovery_wait mbc={255={}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.15( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active+recovery_wait mbc={255={}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.15( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active+recovery_wait mbc={255={}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.479390 2 0.000013
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.479427 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.15( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.003517 1 0.000042
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.15( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.474510 2 0.000015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.15( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[10.15( v 51'66 (0'0,51'66] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 49'65 mlcod 49'65 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 60932096 unmapped: 876544 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 495138 data_alloc: 218103808 data_used: 36864
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 33'4 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.069719 1 0.000058
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 33'4 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 33'4 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 33'4 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.536294 1 0.000029
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.536385 1 0.000012
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.264627 1 0.000102
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.071522 1 0.000051
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 51 heartbeat osd_stat(store_statfs(0x4fe140000/0x0/0x4ffc00000, data 0x3e934/0x8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.067241 1 0.000063
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.603571 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.633835 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.618740 2 0.000014
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.618776 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000061 1 0.000119
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.074126 1 0.000030
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.610542 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.640889 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.707442 2 0.000014
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.707471 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000049 1 0.000075
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.170328 2 0.000087
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.435001 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.737754 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.185096 2 0.000071
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.256654 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.752130 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.803346 2 0.000017
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.803379 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000057 1 0.000102
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.191220 2 0.000140
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.191344 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.826700 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.183944 2 0.000108
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.184052 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.907602 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.154241 2 0.000190
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.154427 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.974485 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 51 handle_osd_map epochs [52,52], i have 51, src has [1,52]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.13( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.021910 5 0.000031
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.13( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.5( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.022158 5 0.000023
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.13( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.5( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.5( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.11( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.022048 5 0.000025
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.11( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.11( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.b( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.022470 5 0.000024
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.b( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.b( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.17( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=3 mbc={}] exit Started/Stray 1.022817 5 0.000034
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.17( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=3 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.17( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=3 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.9( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.023222 5 0.000049
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.9( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.9( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.d( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=8 mbc={}] exit Started/Stray 1.023968 5 0.000033
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.d( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.d( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.021554 5 0.000033
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.3( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=8 mbc={}] exit Started/Stray 1.024244 5 0.000046
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.3( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.3( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.023719 5 0.000030
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1d( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.024415 5 0.000025
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1d( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1d( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.024466 5 0.000029
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.19( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.024620 5 0.000022
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.19( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.19( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.15( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.024874 5 0.000033
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.15( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.15( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.7( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.021891 5 0.003331
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.7( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.7( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1b( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=2 mbc={}] exit Started/Stray 1.024956 5 0.000022
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1b( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1b( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.027290 5 0.000023
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 1.027319 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000044 1 0.000047
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.13( v 44'389 lc 39'112 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.008645 4 0.000082
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.13( v 44'389 lc 39'112 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.13( v 44'389 lc 39'112 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000039 1 0.000053
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.13( v 44'389 lc 39'112 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[6.f( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.027262 2 0.000146
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[6.f( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.027345 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[6.f( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.070841 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.055162 1 0.000020
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.5( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.064090 4 0.000098
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.5( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.5( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000057 1 0.000078
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.5( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052593 1 0.000186
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.11( v 44'389 lc 39'191 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.116962 4 0.000178
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.11( v 44'389 lc 39'191 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.11( v 44'389 lc 39'191 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000079 1 0.000063
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.11( v 44'389 lc 39'191 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038484 1 0.000040
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.b( v 44'389 lc 39'78 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.155621 4 0.000054
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.b( v 44'389 lc 39'78 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.b( v 44'389 lc 39'78 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000054 1 0.000051
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.b( v 44'389 lc 39'78 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.031487 1 0.000040
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1( v 44'389 lc 39'45 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.187048 4 0.000100
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1( v 44'389 lc 39'45 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1( v 44'389 lc 39'45 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000060 1 0.000057
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1( v 44'389 lc 39'45 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052547 1 0.000039
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.17( v 44'389 lc 39'38 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=3 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.240393 4 0.000083
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.17( v 44'389 lc 39'38 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=3 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.17( v 44'389 lc 39'38 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=3 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000051 1 0.000054
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.17( v 44'389 lc 39'38 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=3 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.024275 1 0.000046
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.d( v 44'389 lc 39'107 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.264503 4 0.000081
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.d( v 44'389 lc 39'107 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.d( v 44'389 lc 39'107 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000062 1 0.000025
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.d( v 44'389 lc 39'107 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:32.017902+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.059914 1 0.000065
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1f( v 44'389 lc 39'177 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.324378 4 0.000061
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1f( v 44'389 lc 39'177 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1f( v 44'389 lc 39'177 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000019 1 0.000034
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1f( v 44'389 lc 39'177 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.039281 1 0.000014
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.9( v 44'389 lc 39'99 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.364306 4 0.000049
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.9( v 44'389 lc 39'99 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.9( v 44'389 lc 39'99 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000056 1 0.000055
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.9( v 44'389 lc 39'99 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038400 1 0.000022
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.19( v 44'389 lc 39'69 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.402617 4 0.000132
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.19( v 44'389 lc 39'69 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.19( v 44'389 lc 39'69 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000086 1 0.000067
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.19( v 44'389 lc 39'69 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052634 1 0.000048
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.7( v 44'389 lc 39'46 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.455407 4 0.000047
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.7( v 44'389 lc 39'46 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.7( v 44'389 lc 39'46 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000075 1 0.000021
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.7( v 44'389 lc 39'46 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038442 1 0.000060
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1b( v 44'389 lc 39'200 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.494055 4 0.000288
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1b( v 44'389 lc 39'200 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1b( v 44'389 lc 39'200 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000051 1 0.000022
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1b( v 44'389 lc 39'200 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.017484 1 0.000054
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.15( v 44'389 lc 39'143 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.511911 4 0.000046
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.15( v 44'389 lc 39'143 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.15( v 44'389 lc 39'143 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000103 1 0.000093
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.15( v 44'389 lc 39'143 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 983040 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.031523 1 0.000038
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1d( v 44'389 lc 39'41 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.543862 4 0.000090
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1d( v 44'389 lc 39'41 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1d( v 44'389 lc 39'41 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000039 1 0.000064
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1d( v 44'389 lc 39'41 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038530 1 0.000042
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.f( v 44'389 lc 39'52 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.582626 4 0.000296
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.f( v 44'389 lc 39'52 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.f( v 44'389 lc 39'52 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000043 1 0.000045
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.f( v 44'389 lc 39'52 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052721 1 0.000033
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.3( v 44'389 lc 39'54 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.635674 4 0.000067
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.3( v 44'389 lc 39'54 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.3( v 44'389 lc 39'54 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000064 1 0.000097
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.3( v 44'389 lc 39'54 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.059627 1 0.000020
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 52 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 52 handle_osd_map epochs [53,53], i have 52, src has [1,53]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.829527 1 0.000043
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.868392 1 0.000050
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.985236 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.007427 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.720206 1 0.000034
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.985012 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.007882 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000042 1 0.000075
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000036 1 0.000048
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000051 1 0.000051
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.921289 1 0.000022
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.985395 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.007333 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000024 1 0.000228
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000013 1 0.000022
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.582406 1 0.000020
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.985229 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.008468 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000026 1 0.000039
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000014 1 0.000021
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.349560 1 0.000051
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.985031 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.008920 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000020 1 0.000033
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000012 1 0.000021
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.660703 1 0.000026
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.985247 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.009240 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000027
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 1 0.000030
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.745562 1 0.000065
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.985317 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.006891 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000028
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000009 1 0.000019
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.289853 1 0.000142
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.985386 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.009651 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000029
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000009 1 0.000018
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.402840 1 0.000063
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.985355 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.009794 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000027
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000009 1 0.000018
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.621587 1 0.000148
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.985411 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.009898 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000020 1 0.000030
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 1 0.000020
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.530038 1 0.000020
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.985455 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.010093 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000028
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000010 1 0.000018
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.473688 1 0.000016
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.985330 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.010303 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000020 1 0.000030
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 1 0.000018
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.441963 1 0.000053
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.985570 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.010462 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000027
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000009 1 0.000017
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.491603 1 0.000016
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.985584 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.007495 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000026
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000010 1 0.000018
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.797818 1 0.000044
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.986691 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.009181 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000038 1 0.001684
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000018 1 0.000024
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.987059 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.009145 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000029 1 0.001974
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000016 1 0.000022
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.002078 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000035 1 0.002112
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003424 3 0.000028
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003235 3 0.000015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=17
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=17
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=12
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=12
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003396 3 0.000017
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003195 3 0.000015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=14
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=14
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003319 3 0.000016
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003093 3 0.000015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=19
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=19
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003473 3 0.000015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003229 3 0.000014
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000033 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=5
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=5
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003249 3 0.000015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=13
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=13
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003237 3 0.000014
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=13
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=13
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=12
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=12
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003878 3 0.000014
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004652 3 0.001539
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=9
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003560 3 0.000014
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000041 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=9
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003417 3 0.000020
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003287 3 0.000020
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=6
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=6
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003430 3 0.000035
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 53 heartbeat osd_stat(store_statfs(0x4fe12a000/0x0/0x4ffc00000, data 0x42c20/0xa3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:33.018003+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 827392 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 53 handle_osd_map epochs [53,54], i have 53, src has [1,54]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 53 handle_osd_map epochs [54,54], i have 54, src has [1,54]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995919 2 0.000032
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999260 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996932 2 0.000160
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000543 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996484 2 0.000037
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000097 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996871 2 0.000035
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000147 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997119 2 0.000030
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000411 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997311 2 0.000034
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000582 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997572 2 0.000035
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000695 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996952 2 0.000031
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000863 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997867 2 0.000154
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001205 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997864 2 0.000302
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001378 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997861 2 0.000045
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001211 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998171 2 0.000037
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001605 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996460 2 0.000024
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999950 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997064 2 0.000031
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000551 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997367 2 0.000184
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002174 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998707 2 0.000064
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002175 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001293 4 0.000149
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002558 4 0.000123
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002587 4 0.000063
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000049 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000104 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002975 4 0.000104
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000042 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002920 4 0.000115
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002967 4 0.000136
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002874 4 0.000042
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002759 4 0.000040
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003034 4 0.000219
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000101 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002895 4 0.000160
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002821 4 0.000135
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002620 4 0.000090
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002735 4 0.000131
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002935 4 0.000101
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002962 4 0.000202
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002844 4 0.000046
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:34.018098+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:03.388911+0000 osd.0 (osd.0) 54 : cluster [DBG] 5.7 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:03.403013+0000 osd.0 (osd.0) 55 : cluster [DBG] 5.7 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 688128 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 55) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:03.388911+0000 osd.0 (osd.0) 54 : cluster [DBG] 5.7 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:03.403013+0000 osd.0 (osd.0) 55 : cluster [DBG] 5.7 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:35.018247+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 671744 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14547 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:36.018350+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 647168 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 617716 data_alloc: 218103808 data_used: 45056
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:37.018444+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 647168 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 54 heartbeat osd_stat(store_statfs(0x4fe11f000/0x0/0x4ffc00000, data 0x46320/0xac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 54 handle_osd_map epochs [55,55], i have 54, src has [1,55]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 18.099187 26 0.000063
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 18.106174 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 19.103671 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 19.103685 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 18.097737 26 0.000058
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 18.106151 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.900985718s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 120.620323181s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 19.104031 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 19.104046 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.900940895s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.620323181s@ mbc={}] exit Reset 0.000072 1 0.000111
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.900940895s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.620323181s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.900940895s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.620323181s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.900940895s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.620323181s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.900940895s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.620323181s@ mbc={}] exit Start 0.000007 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901856422s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 120.621246338s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.900940895s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.620323181s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901811600s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621246338s@ mbc={}] exit Reset 0.000070 1 0.000101
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901811600s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621246338s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901811600s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621246338s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901811600s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621246338s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901811600s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621246338s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901811600s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621246338s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 18.098482 26 0.000047
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 18.106313 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 19.104360 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 19.104373 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901603699s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 120.621353149s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901576042s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621353149s@ mbc={}] exit Reset 0.000043 1 0.000067
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901576042s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621353149s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901576042s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621353149s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901576042s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621353149s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901576042s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621353149s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901576042s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621353149s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 18.098602 26 0.000077
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 18.106201 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 19.104412 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 19.104422 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901453972s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 120.621376038s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901418686s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621376038s@ mbc={}] exit Reset 0.000046 1 0.000065
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901418686s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621376038s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901418686s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621376038s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901418686s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621376038s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901418686s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621376038s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 55 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55 pruub=13.901418686s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621376038s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 55 handle_osd_map epochs [55,55], i have 55, src has [1,55]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:38.018551+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:07.492866+0000 osd.0 (osd.0) 56 : cluster [DBG] 5.3 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:07.506985+0000 osd.0 (osd.0) 57 : cluster [DBG] 5.3 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 753664 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.19 deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.19 deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 57) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:07.492866+0000 osd.0 (osd.0) 56 : cluster [DBG] 5.3 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:07.506985+0000 osd.0 (osd.0) 57 : cluster [DBG] 5.3 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.001033 6 0.000053
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.001529 6 0.000113
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.002512 7 0.000045
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.003070 7 0.000061
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000034 1 0.000030
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000124 1 0.000034
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.2( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 DELETING pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.006838 1 0.000040
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.2( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.006994 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.2( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.010100 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.009824 3 0.000031
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.009841 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000075 1 0.000034
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.a( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 DELETING pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.014239 1 0.000114
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.a( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.014324 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.a( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.016865 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.083736 3 0.000042
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.083784 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000076 1 0.000068
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.6( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 DELETING pi=[43,55)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.080206 2 0.000105
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.6( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.080331 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.6( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.091242 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.e( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 DELETING pi=[43,55)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.020860 2 0.000079
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.e( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.020966 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 56 pg[6.e( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=55) [1] r=-1 lpr=55 pi=[43,55)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.106332 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:39.018705+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:08.484751+0000 osd.0 (osd.0) 58 : cluster [DBG] 2.19 deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:08.498885+0000 osd.0 (osd.0) 59 : cluster [DBG] 2.19 deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 753664 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 59) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:08.484751+0000 osd.0 (osd.0) 58 : cluster [DBG] 2.19 deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:08.498885+0000 osd.0 (osd.0) 59 : cluster [DBG] 2.19 deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:40.018844+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 745472 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:41.018954+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 655360 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 613487 data_alloc: 218103808 data_used: 53248
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 56 handle_osd_map epochs [57,58], i have 56, src has [1,58]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.810311317s of 12.056137085s, submitted: 525
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 22.117707 33 0.000079
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 22.124772 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 23.122247 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 23.122268 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58 pruub=9.882452011s) [1] r=-1 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 120.620277405s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58 pruub=9.882416725s) [1] r=-1 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.620277405s@ mbc={}] exit Reset 0.000057 1 0.000082
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58 pruub=9.882416725s) [1] r=-1 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.620277405s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 22.116854 33 0.000055
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 22.124521 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 23.123296 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 23.123315 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58 pruub=9.882416725s) [1] r=-1 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.620277405s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58 pruub=9.882416725s) [1] r=-1 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.620277405s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58 pruub=9.883186340s) [1] r=-1 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 120.621376038s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58 pruub=9.882416725s) [1] r=-1 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.620277405s@ mbc={}] exit Start 0.000022 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58 pruub=9.882416725s) [1] r=-1 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.620277405s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58 pruub=9.883127213s) [1] r=-1 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621376038s@ mbc={}] exit Reset 0.000091 1 0.000124
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58 pruub=9.883127213s) [1] r=-1 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621376038s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58 pruub=9.883127213s) [1] r=-1 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621376038s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58 pruub=9.883127213s) [1] r=-1 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621376038s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58 pruub=9.883127213s) [1] r=-1 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621376038s@ mbc={}] exit Start 0.000012 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58 pruub=9.883127213s) [1] r=-1 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.621376038s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 58 handle_osd_map epochs [57,58], i have 58, src has [1,58]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.3(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=0 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000027 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=0 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000014
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000161 1 0.000028
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.f(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=0 lpr=0 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=0 lpr=0 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000033
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000052 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000064 1 0.000144
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.b(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=0 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000026 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=0 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000019
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000057 1 0.000031
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.7(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=0 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000025 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=0 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000012
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000067 1 0.000021
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.000743 2 0.000052
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.000428 2 0.000031
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.001406 2 0.000036
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000021 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000020 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.000641 2 0.000214
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 58 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 58 heartbeat osd_stat(store_statfs(0x4fe11a000/0x0/0x4ffc00000, data 0x49a6b/0xb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:42.019047+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 622592 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 58 handle_osd_map epochs [58,59], i have 58, src has [1,59]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004894 2 0.000048
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 1.005768 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005048 2 0.000147
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.006718 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 59 handle_osd_map epochs [59,59], i have 59, src has [1,59]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004886 2 0.000027
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.005622 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.7( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005368 2 0.000087
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.005945 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=-1 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.011550 7 0.000405
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=-1 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=-1 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=-1 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.011517 7 0.000069
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=-1 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=-1 lpr=58 pi=[43,58)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.001729 4 0.000100
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000084 1 0.000064
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.7( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.7( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002287 4 0.000072
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.7( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=2 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.002411 5 0.000138
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=2 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002138 4 0.000058
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.206502 2 0.000026
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/52/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.7( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.205715 2 0.000014
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.7( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.7( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000006 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.7( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:43.019132+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:12.570614+0000 osd.0 (osd.0) 60 : cluster [DBG] 5.1e scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:12.584755+0000 osd.0 (osd.0) 61 : cluster [DBG] 5.1e scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.069752 1 0.000067
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000025 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=2 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.275574 1 0.000011
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=2 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=2 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000009 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=2 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=-1 lpr=58 pi=[43,58)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.278387 2 0.000027
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=-1 lpr=58 pi=[43,58)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.278405 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=-1 lpr=58 pi=[43,58)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=-1 lpr=58 pi=[43,58)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=-1 lpr=58 pi=[43,58)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.348203 2 0.000018
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=-1 lpr=58 pi=[43,58)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.348272 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=-1 lpr=58 pi=[43,58)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=-1 lpr=58 pi=[43,58)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=57/59 n=2 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.135926 1 0.000087
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=57/59 n=2 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=57/59 n=2 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=57/59 n=2 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.411596 2 0.000015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.073986 1 0.000044
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/50 les/c/f=59/51/0 sis=57) [0] r=0 lpr=58 pi=[50,57)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=-1 lpr=58 pi=[43,58)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.209305 1 0.000035
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=-1 lpr=58 pi=[43,58)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=-1 lpr=58 pi=[43,58)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.139545 1 0.000127
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=-1 lpr=58 pi=[43,58)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64380928 unmapped: 573440 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.4( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=-1 lpr=58 DELETING pi=[43,58)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.044673 2 0.000230
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.4( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=-1 lpr=58 pi=[43,58)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.254028 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.4( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=-1 lpr=58 pi=[43,58)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.543995 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.c( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=-1 lpr=58 DELETING pi=[43,58)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.052035 2 0.000100
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.c( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=-1 lpr=58 pi=[43,58)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.191673 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 59 pg[6.c( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=58) [1] r=-1 lpr=58 pi=[43,58)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.551880 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 61) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:12.570614+0000 osd.0 (osd.0) 60 : cluster [DBG] 5.1e scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:12.584755+0000 osd.0 (osd.0) 61 : cluster [DBG] 5.1e scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 59 handle_osd_map epochs [60,60], i have 59, src has [1,60]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 59 handle_osd_map epochs [60,60], i have 60, src has [1,60]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.5(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=0 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000039 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=0 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000021
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000079 1 0.000035
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.d(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=0 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000022 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=0 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000047 1 0.000023
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.001025 2 0.000021
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.001411 2 0.000040
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000018 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 60 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:44.019273+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 524288 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 60 handle_osd_map epochs [60,61], i have 60, src has [1,61]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 60 handle_osd_map epochs [61,61], i have 61, src has [1,61]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000252 2 0.001031
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.002306 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000890 2 0.000052
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.5( v 37'39 lc 33'6 (0'0,37'39] local-lis/les=60/61 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.002044 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.5( v 37'39 lc 33'6 (0'0,37'39] local-lis/les=60/61 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/50 les/c/f=61/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.002138 5 0.000406
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/50 les/c/f=61/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.5( v 37'39 lc 33'6 (0'0,37'39] local-lis/les=60/61 n=2 ec=43/21 lis/c=60/50 les/c/f=61/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.002299 5 0.000413
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.5( v 37'39 lc 33'6 (0'0,37'39] local-lis/les=60/61 n=2 ec=43/21 lis/c=60/50 les/c/f=61/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/50 les/c/f=61/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000089 1 0.000079
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/50 les/c/f=61/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/50 les/c/f=61/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.d( v 37'39 lc 33'10 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/50 les/c/f=61/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/50 les/c/f=61/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.067063 1 0.000031
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/50 les/c/f=61/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/50 les/c/f=61/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000024 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.5( v 37'39 lc 33'6 (0'0,37'39] local-lis/les=60/61 n=2 ec=43/21 lis/c=60/50 les/c/f=61/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.067162 1 0.000048
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.5( v 37'39 lc 33'6 (0'0,37'39] local-lis/les=60/61 n=2 ec=43/21 lis/c=60/50 les/c/f=61/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.5( v 37'39 lc 33'6 (0'0,37'39] local-lis/les=60/61 n=2 ec=43/21 lis/c=60/50 les/c/f=61/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000007 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.5( v 37'39 lc 33'6 (0'0,37'39] local-lis/les=60/61 n=2 ec=43/21 lis/c=60/50 les/c/f=61/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/50 les/c/f=61/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=60/61 n=2 ec=43/21 lis/c=60/50 les/c/f=61/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.125829 1 0.000080
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=60/61 n=2 ec=43/21 lis/c=60/50 les/c/f=61/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=60/61 n=2 ec=43/21 lis/c=60/50 les/c/f=61/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 61 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=60/61 n=2 ec=43/21 lis/c=60/50 les/c/f=61/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:45.019416+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:14.664704+0000 osd.0 (osd.0) 62 : cluster [DBG] 2.18 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:14.678848+0000 osd.0 (osd.0) 63 : cluster [DBG] 2.18 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 450560 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 61 heartbeat osd_stat(store_statfs(0x4fe10f000/0x0/0x4ffc00000, data 0x50850/0xbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 63) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:14.664704+0000 osd.0 (osd.0) 62 : cluster [DBG] 2.18 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:14.678848+0000 osd.0 (osd.0) 63 : cluster [DBG] 2.18 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:46.019565+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 442368 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 645853 data_alloc: 218103808 data_used: 61440
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:47.019702+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 393216 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:48.019839+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 393216 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 61 handle_osd_map epochs [62,63], i have 61, src has [1,63]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 14.627068 22 0.000241
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 14.630199 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 15.630358 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started 15.630375 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.372240067s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 active pruub 126.707878113s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.372200966s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.707878113s@ mbc={}] exit Reset 0.000067 1 0.000104
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.372200966s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.707878113s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.372200966s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.707878113s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.372200966s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.707878113s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.372200966s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.707878113s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.372200966s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.707878113s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 14.627321 22 0.000048
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 14.630408 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 15.631118 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started 15.631145 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.372087479s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 active pruub 126.708137512s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.372042656s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.708137512s@ mbc={}] exit Reset 0.000073 1 0.000138
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.372042656s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.708137512s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.372042656s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.708137512s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.372042656s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.708137512s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.372042656s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.708137512s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.372042656s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.708137512s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 14.627708 22 0.000047
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 14.630555 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 15.632040 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started 15.632066 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.371743202s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 active pruub 126.708282471s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 14.627862 22 0.000039
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 14.630748 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 15.630719 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started 15.632810 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.371612549s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 active pruub 126.708442688s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.371588707s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.708442688s@ mbc={}] exit Reset 0.000039 1 0.000297
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.371588707s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.708442688s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.371588707s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.708442688s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.371588707s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.708442688s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.371588707s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.708442688s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.371588707s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.708442688s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.371382713s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.708282471s@ mbc={}] exit Reset 0.000393 1 0.000588
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.371382713s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.708282471s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.371382713s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.708282471s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.371382713s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.708282471s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.371382713s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.708282471s@ mbc={}] exit Start 0.000100 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.371382713s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 126.708282471s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:49.019941+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 63 handle_osd_map epochs [63,64], i have 63, src has [1,64]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.744472 3 0.000065
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.744509 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000064 1 0.000091
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.744498 3 0.000114
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.744523 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000028 1 0.000043
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.743821 3 0.000190
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.743975 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000083 1 0.000108
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000017 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.744228 3 0.000075
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.744261 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000106 1 0.000129
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000007 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 64 handle_osd_map epochs [64,64], i have 64, src has [1,64]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002872 2 0.000125
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004479 2 0.000031
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000021 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004144 2 0.000025
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004364 2 0.000049
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64700416 unmapped: 1302528 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:50.020046+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:19.713463+0000 osd.0 (osd.0) 64 : cluster [DBG] 5.2 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:19.727608+0000 osd.0 (osd.0) 65 : cluster [DBG] 5.2 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 65) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:19.713463+0000 osd.0 (osd.0) 64 : cluster [DBG] 5.2 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:19.727608+0000 osd.0 (osd.0) 65 : cluster [DBG] 5.2 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 64 handle_osd_map epochs [64,65], i have 64, src has [1,65]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 64 handle_osd_map epochs [65,65], i have 65, src has [1,65]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.008401 3 0.000063
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.012837 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010845 3 0.000070
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.013796 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010408 3 0.000039
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.014596 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010634 3 0.000133
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.015180 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 30.474411 53 0.000119
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 30.482821 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 31.480190 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 31.480202 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65 pruub=9.525791168s) [2] r=-1 lpr=65 pi=[43,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 128.621459961s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65 pruub=9.525757790s) [2] r=-1 lpr=65 pi=[43,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.621459961s@ mbc={}] exit Reset 0.000048 1 0.000069
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65 pruub=9.525757790s) [2] r=-1 lpr=65 pi=[43,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.621459961s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65 pruub=9.525757790s) [2] r=-1 lpr=65 pi=[43,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.621459961s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65 pruub=9.525757790s) [2] r=-1 lpr=65 pi=[43,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.621459961s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65 pruub=9.525757790s) [2] r=-1 lpr=65 pi=[43,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.621459961s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65 pruub=9.525757790s) [2] r=-1 lpr=65 pi=[43,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.621459961s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.005159 5 0.000182
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000051 1 0.000043
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.007315 5 0.000179
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.007093 5 0.000237
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.007367 5 0.000217
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.038243 1 0.000029
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.049534 2 0.000043
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.084513 1 0.000022
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.029412 1 0.000028
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.021325 2 0.000075
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.135216 1 0.000027
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.035983 1 0.000040
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64684032 unmapped: 1318912 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035423 2 0.000049
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.206604 1 0.000021
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.043224 1 0.000035
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035381 2 0.000049
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:51.020182+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:20.723139+0000 osd.0 (osd.0) 66 : cluster [DBG] 3.1f scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:20.736940+0000 osd.0 (osd.0) 67 : cluster [DBG] 3.1f scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 67) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:20.723139+0000 osd.0 (osd.0) 66 : cluster [DBG] 3.1f scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:20.736940+0000 osd.0 (osd.0) 67 : cluster [DBG] 3.1f scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 65 handle_osd_map epochs [66,66], i have 66, src has [1,66]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.712500 1 0.000073
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 1.005218 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 2.019855 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 2.019880 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.001863480s) [2] async=[2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 44'389 active pruub 135.102462769s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.001706123s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.102462769s@ mbc={}] exit Reset 0.000194 1 0.000291
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.001706123s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.102462769s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.001706123s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.102462769s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.001706123s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.102462769s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.001706123s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.102462769s@ mbc={}] exit Start 0.000080 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.001706123s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.102462769s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.914089 1 0.000155
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.791781 1 0.000117
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 1.005692 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 2.020897 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 2.020923 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.001188278s) [2] async=[2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 44'389 active pruub 135.102416992s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.001043320s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.102416992s@ mbc={}] exit Reset 0.000174 1 0.000258
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.001043320s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.102416992s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.001043320s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.102416992s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.001043320s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.102416992s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.001043320s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.102416992s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.001043320s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.102416992s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 1.007608 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 2.020458 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 2.020490 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.863486 1 0.000142
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 1.006400 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 2.020215 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 2.020288 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[53,64)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.000925064s) [2] async=[2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 44'389 active pruub 135.102462769s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.000885963s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.102462769s@ mbc={}] exit Reset 0.000055 1 0.000229
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.000885963s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.102462769s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.000885963s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.102462769s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.000885963s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.102462769s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.000885963s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.102462769s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=15.000885963s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.102462769s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=14.997325897s) [2] async=[2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 44'389 active pruub 135.098831177s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=14.996876717s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.098831177s@ mbc={}] exit Reset 0.000469 1 0.000856
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=14.996876717s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.098831177s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=14.996876717s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.098831177s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=14.996876717s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.098831177s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=14.996876717s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.098831177s@ mbc={}] exit Start 0.000100 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66 pruub=14.996876717s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 135.098831177s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=-1 lpr=65 pi=[43,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.012142 7 0.000055
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=-1 lpr=65 pi=[43,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=-1 lpr=65 pi=[43,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=-1 lpr=65 pi=[43,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000068 1 0.000074
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=-1 lpr=65 pi=[43,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[6.8( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=-1 lpr=65 DELETING pi=[43,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.000999 1 0.000083
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[6.8( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=-1 lpr=65 pi=[43,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.001132 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 66 pg[6.8( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=65) [2] r=-1 lpr=65 pi=[43,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.013320 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 66 heartbeat osd_stat(store_statfs(0x4fe0fe000/0x0/0x4ffc00000, data 0x59706/0xcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 1253376 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 662422 data_alloc: 218103808 data_used: 90112
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:52.020372+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.252840042s of 10.359681129s, submitted: 101
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[6.9(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=0 lpr=0 pi=[50,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000060 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=0 lpr=0 pi=[50,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=0 lpr=67 pi=[50,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000035 1 0.000059
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=0 lpr=67 pi=[50,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=0 lpr=67 pi=[50,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=0 lpr=67 pi=[50,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=0 lpr=67 pi=[50,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000083 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=0 lpr=67 pi=[50,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=0 lpr=67 pi=[50,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=0 lpr=67 pi=[50,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=0 lpr=67 pi=[50,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000118 1 0.000207
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=0 lpr=67 pi=[50,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=0 lpr=67 pi=[50,67)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001430 2 0.000055
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=0 lpr=67 pi=[50,67)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=0 lpr=67 pi=[50,67)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000019 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=0 lpr=67 pi=[50,67)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009349 7 0.000211
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.008680 7 0.000867
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000033 1 0.000032
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.008971 7 0.000265
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000097 1 0.000013
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000060 1 0.000071
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.007931 7 0.000972
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000064 1 0.000920
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.1f( v 44'389 (0'0,44'389] lb MIN local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 DELETING pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.062931 2 0.000175
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.1f( v 44'389 (0'0,44'389] lb MIN local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.062996 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.1f( v 44'389 (0'0,44'389] lb MIN local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.072487 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.17( v 44'389 (0'0,44'389] lb MIN local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 DELETING pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.085080 2 0.000067
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.17( v 44'389 (0'0,44'389] lb MIN local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.085219 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.17( v 44'389 (0'0,44'389] lb MIN local-lis/les=64/65 n=5 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.093934 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.7( v 44'389 (0'0,44'389] lb MIN local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 DELETING pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.122070 2 0.000086
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.7( v 44'389 (0'0,44'389] lb MIN local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.122180 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.7( v 44'389 (0'0,44'389] lb MIN local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.131188 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 1196032 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.f( v 44'389 (0'0,44'389] lb MIN local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 DELETING pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.173191 2 0.000128
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.f( v 44'389 (0'0,44'389] lb MIN local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.173310 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 67 pg[9.f( v 44'389 (0'0,44'389] lb MIN local-lis/les=64/65 n=6 ec=45/34 lis/c=64/53 les/c/f=65/54/0 sis=66) [2] r=-1 lpr=66 pi=[53,66)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.182363 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:53.020484+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:22.726035+0000 osd.0 (osd.0) 68 : cluster [DBG] 3.1b scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:22.740111+0000 osd.0 (osd.0) 69 : cluster [DBG] 3.1b scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 67 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 69) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:22.726035+0000 osd.0 (osd.0) 68 : cluster [DBG] 3.1b scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:22.740111+0000 osd.0 (osd.0) 69 : cluster [DBG] 3.1b scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 68 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=0 lpr=67 pi=[50,67)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004546 2 0.000100
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 68 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=0 lpr=67 pi=[50,67)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006205 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 68 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=0 lpr=67 pi=[50,67)/1 crt=37'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 68 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=67/68 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=0 lpr=67 pi=[50,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 68 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=67/68 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=67) [0] r=0 lpr=67 pi=[50,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 68 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=67/68 n=1 ec=43/21 lis/c=67/50 les/c/f=68/51/0 sis=67) [0] r=0 lpr=67 pi=[50,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001838 4 0.000136
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 68 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=67/68 n=1 ec=43/21 lis/c=67/50 les/c/f=68/51/0 sis=67) [0] r=0 lpr=67 pi=[50,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 68 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=67/68 n=1 ec=43/21 lis/c=67/50 les/c/f=68/51/0 sis=67) [0] r=0 lpr=67 pi=[50,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 68 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=67/68 n=1 ec=43/21 lis/c=67/50 les/c/f=68/51/0 sis=67) [0] r=0 lpr=67 pi=[50,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 1187840 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:54.020653+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 68 handle_osd_map epochs [68,69], i have 68, src has [1,69]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 69 pg[6.a(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 69 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=0 lpr=0 pi=[55,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000034 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 69 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=0 lpr=0 pi=[55,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 69 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=0 lpr=69 pi=[55,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000013
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 69 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=0 lpr=69 pi=[55,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 69 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=0 lpr=69 pi=[55,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 69 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=0 lpr=69 pi=[55,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 69 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=0 lpr=69 pi=[55,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 69 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=0 lpr=69 pi=[55,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 69 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=0 lpr=69 pi=[55,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 69 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=0 lpr=69 pi=[55,69)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 69 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=0 lpr=69 pi=[55,69)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000085 1 0.000047
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 69 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=0 lpr=69 pi=[55,69)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 69 handle_osd_map epochs [69,69], i have 69, src has [1,69]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 69 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=0 lpr=69 pi=[55,69)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000824 2 0.000030
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 69 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=0 lpr=69 pi=[55,69)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 69 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=0 lpr=69 pi=[55,69)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 69 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=0 lpr=69 pi=[55,69)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 1146880 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:55.020766+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 69 handle_osd_map epochs [69,70], i have 69, src has [1,70]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 69 handle_osd_map epochs [69,70], i have 70, src has [1,70]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 70 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=0 lpr=69 pi=[55,69)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.897360 2 0.000128
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 70 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=0 lpr=69 pi=[55,69)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.898342 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 70 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=55/56 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=0 lpr=69 pi=[55,69)/1 crt=37'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 70 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=69/70 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=0 lpr=69 pi=[55,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 70 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=69/70 n=1 ec=43/21 lis/c=55/55 les/c/f=56/56/0 sis=69) [0] r=0 lpr=69 pi=[55,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 70 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=69/70 n=1 ec=43/21 lis/c=69/55 les/c/f=70/56/0 sis=69) [0] r=0 lpr=69 pi=[55,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001096 3 0.000099
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 70 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=69/70 n=1 ec=43/21 lis/c=69/55 les/c/f=70/56/0 sis=69) [0] r=0 lpr=69 pi=[55,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 70 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=69/70 n=1 ec=43/21 lis/c=69/55 les/c/f=70/56/0 sis=69) [0] r=0 lpr=69 pi=[55,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 70 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=69/70 n=1 ec=43/21 lis/c=69/55 les/c/f=70/56/0 sis=69) [0] r=0 lpr=69 pi=[55,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 70 handle_osd_map epochs [70,70], i have 70, src has [1,70]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64905216 unmapped: 1097728 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 70 heartbeat osd_stat(store_statfs(0x4fe0f2000/0x0/0x4ffc00000, data 0x61f41/0xd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:56.020917+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 1081344 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 641959 data_alloc: 218103808 data_used: 77824
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:57.021018+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:26.763455+0000 osd.0 (osd.0) 70 : cluster [DBG] 3.a scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:26.777563+0000 osd.0 (osd.0) 71 : cluster [DBG] 3.a scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 71) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:26.763455+0000 osd.0 (osd.0) 70 : cluster [DBG] 3.a scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:26.777563+0000 osd.0 (osd.0) 71 : cluster [DBG] 3.a scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64946176 unmapped: 1056768 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:58.021154+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 70 handle_osd_map epochs [71,72], i have 70, src has [1,72]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 71 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=57) [0] r=0 lpr=58 crt=37'39 mlcod 37'39 active+clean] exit Started/Primary/Active/Clean 14.900462 33 0.000108
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 71 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=57) [0] r=0 lpr=58 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active 15.388307 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 71 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=57) [0] r=0 lpr=58 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary 16.394264 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 71 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=57) [0] r=0 lpr=58 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started 16.394279 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 71 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=57) [0] r=0 lpr=58 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 71 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71 pruub=8.613817215s) [1] r=-1 lpr=71 pi=[57,71)/1 crt=37'39 mlcod 37'39 active pruub 135.751876831s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 72 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71 pruub=8.613759995s) [1] r=-1 lpr=71 pi=[57,71)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 135.751876831s@ mbc={}] exit Reset 0.000078 2 0.000116
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 72 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71 pruub=8.613759995s) [1] r=-1 lpr=71 pi=[57,71)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 135.751876831s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 72 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71 pruub=8.613759995s) [1] r=-1 lpr=71 pi=[57,71)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 135.751876831s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 72 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71 pruub=8.613759995s) [1] r=-1 lpr=71 pi=[57,71)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 135.751876831s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 72 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71 pruub=8.613759995s) [1] r=-1 lpr=71 pi=[57,71)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 135.751876831s@ mbc={}] exit Start 0.000009 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 72 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71 pruub=8.613759995s) [1] r=-1 lpr=71 pi=[57,71)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 135.751876831s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 72 handle_osd_map epochs [71,72], i have 72, src has [1,72]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 72 heartbeat osd_stat(store_statfs(0x4fe0f5000/0x0/0x4ffc00000, data 0x61f41/0xd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 64970752 unmapped: 1032192 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:40:59.021681+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:28.836155+0000 osd.0 (osd.0) 72 : cluster [DBG] 3.6 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:28.850284+0000 osd.0 (osd.0) 73 : cluster [DBG] 3.6 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 73) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:28.836155+0000 osd.0 (osd.0) 72 : cluster [DBG] 3.6 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:28.850284+0000 osd.0 (osd.0) 73 : cluster [DBG] 3.6 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 73 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=-1 lpr=71 pi=[57,71)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018042 6 0.000085
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 73 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=-1 lpr=71 pi=[57,71)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 73 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=-1 lpr=71 pi=[57,71)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 73 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=-1 lpr=71 pi=[57,71)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.005586 3 0.000030
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 73 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=-1 lpr=71 pi=[57,71)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.005625 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 73 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=-1 lpr=71 pi=[57,71)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 73 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=-1 lpr=71 pi=[57,71)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 73 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=-1 lpr=71 pi=[57,71)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000068 1 0.000100
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 73 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=-1 lpr=71 pi=[57,71)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 73 pg[6.b( v 37'39 (0'0,37'39] lb MIN local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=-1 lpr=71 DELETING pi=[57,71)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.008681 2 0.000119
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 73 pg[6.b( v 37'39 (0'0,37'39] lb MIN local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=-1 lpr=71 pi=[57,71)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.008790 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 73 pg[6.b( v 37'39 (0'0,37'39] lb MIN local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=71) [1] r=-1 lpr=71 pi=[57,71)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started 1.032521 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65052672 unmapped: 950272 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:00.021808+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:29.798890+0000 osd.0 (osd.0) 74 : cluster [DBG] 3.f scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:29.813106+0000 osd.0 (osd.0) 75 : cluster [DBG] 3.f scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 75) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:29.798890+0000 osd.0 (osd.0) 74 : cluster [DBG] 3.f scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:29.813106+0000 osd.0 (osd.0) 75 : cluster [DBG] 3.f scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65069056 unmapped: 933888 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:01.021928+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 860160 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 654604 data_alloc: 218103808 data_used: 98304
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:02.022015+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 75 handle_osd_map epochs [75,76], i have 75, src has [1,76]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.991311073s of 10.069980621s, submitted: 59
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 811008 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:03.022103+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 802816 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:04.022191+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:33.849583+0000 osd.0 (osd.0) 76 : cluster [DBG] 3.c scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:33.863715+0000 osd.0 (osd.0) 77 : cluster [DBG] 3.c scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 76 heartbeat osd_stat(store_statfs(0x4fe0e1000/0x0/0x4ffc00000, data 0x6c305/0xeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 77) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:33.849583+0000 osd.0 (osd.0) 76 : cluster [DBG] 3.c scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:33.863715+0000 osd.0 (osd.0) 77 : cluster [DBG] 3.c scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 802816 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:05.022326+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 761856 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:06.022436+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 761856 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 662015 data_alloc: 218103808 data_used: 106496
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:07.022566+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:36.825798+0000 osd.0 (osd.0) 78 : cluster [DBG] 3.3 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:36.839868+0000 osd.0 (osd.0) 79 : cluster [DBG] 3.3 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 79) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:36.825798+0000 osd.0 (osd.0) 78 : cluster [DBG] 3.3 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:36.839868+0000 osd.0 (osd.0) 79 : cluster [DBG] 3.3 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 761856 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:08.022704+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:37.846627+0000 osd.0 (osd.0) 80 : cluster [DBG] 3.15 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:37.860730+0000 osd.0 (osd.0) 81 : cluster [DBG] 3.15 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 81) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:37.846627+0000 osd.0 (osd.0) 80 : cluster [DBG] 3.15 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:37.860730+0000 osd.0 (osd.0) 81 : cluster [DBG] 3.15 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 76 heartbeat osd_stat(store_statfs(0x4fe0e3000/0x0/0x4ffc00000, data 0x6c305/0xeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 76 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=60) [0] r=0 lpr=60 crt=37'39 mlcod 37'39 active+clean] exit Started/Primary/Active/Clean 23.394454 44 0.000247
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=60) [0] r=0 lpr=60 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active 23.463967 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=60) [0] r=0 lpr=60 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary 24.466026 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=60) [0] r=0 lpr=60 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started 24.466044 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=60) [0] r=0 lpr=60 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77 pruub=8.538226128s) [1] r=-1 lpr=77 pi=[60,77)/1 crt=37'39 mlcod 37'39 active pruub 145.758438110s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77 pruub=8.538172722s) [1] r=-1 lpr=77 pi=[60,77)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 145.758438110s@ mbc={}] exit Reset 0.000076 1 0.000114
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77 pruub=8.538172722s) [1] r=-1 lpr=77 pi=[60,77)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 145.758438110s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77 pruub=8.538172722s) [1] r=-1 lpr=77 pi=[60,77)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 145.758438110s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77 pruub=8.538172722s) [1] r=-1 lpr=77 pi=[60,77)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 145.758438110s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77 pruub=8.538172722s) [1] r=-1 lpr=77 pi=[60,77)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 145.758438110s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77 pruub=8.538172722s) [1] r=-1 lpr=77 pi=[60,77)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 145.758438110s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 753664 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:09.022845+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 78 handle_osd_map epochs [78,79], i have 78, src has [1,79]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 79 handle_osd_map epochs [78,79], i have 79, src has [1,79]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 79 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=-1 lpr=77 pi=[60,77)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.977496 10 0.000080
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 79 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=-1 lpr=77 pi=[60,77)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 79 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=-1 lpr=77 pi=[60,77)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 79 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=-1 lpr=77 pi=[60,77)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.069112 2 0.000090
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 79 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=-1 lpr=77 pi=[60,77)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.069179 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 79 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=-1 lpr=77 pi=[60,77)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 79 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=-1 lpr=77 pi=[60,77)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 79 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=-1 lpr=77 pi=[60,77)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000057 1 0.000066
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 79 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=-1 lpr=77 pi=[60,77)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 79 pg[6.d( v 37'39 (0'0,37'39] lb MIN local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=-1 lpr=77 DELETING pi=[60,77)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.016073 2 0.000112
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 79 pg[6.d( v 37'39 (0'0,37'39] lb MIN local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=-1 lpr=77 pi=[60,77)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.016155 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 79 pg[6.d( v 37'39 (0'0,37'39] lb MIN local-lis/les=60/61 n=1 ec=43/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=-1 lpr=77 pi=[60,77)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started 1.062877 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 704512 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:10.022956+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 704512 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:11.023067+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65339392 unmapped: 663552 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 672014 data_alloc: 218103808 data_used: 114688
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:12.023188+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 614400 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:13.023308+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 614400 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 79 heartbeat osd_stat(store_statfs(0x4fe0d8000/0x0/0x4ffc00000, data 0x71722/0xf3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.668744087s of 11.686881065s, submitted: 17
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:14.023442+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:43.847454+0000 osd.0 (osd.0) 82 : cluster [DBG] 3.9 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:43.861563+0000 osd.0 (osd.0) 83 : cluster [DBG] 3.9 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65404928 unmapped: 598016 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 83) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:43.847454+0000 osd.0 (osd.0) 82 : cluster [DBG] 3.9 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:43.861563+0000 osd.0 (osd.0) 83 : cluster [DBG] 3.9 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:15.023616+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:44.828583+0000 osd.0 (osd.0) 84 : cluster [DBG] 3.17 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:44.842675+0000 osd.0 (osd.0) 85 : cluster [DBG] 3.17 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65413120 unmapped: 589824 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 85) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:44.828583+0000 osd.0 (osd.0) 84 : cluster [DBG] 3.17 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:44.842675+0000 osd.0 (osd.0) 85 : cluster [DBG] 3.17 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:16.023757+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65429504 unmapped: 573440 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 672885 data_alloc: 218103808 data_used: 135168
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 79 handle_osd_map epochs [79,80], i have 79, src has [1,80]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 80 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=57) [0] r=0 lpr=58 crt=37'39 mlcod 37'39 active+clean] exit Started/Primary/Active/Clean 33.456799 61 0.000124
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 80 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=57) [0] r=0 lpr=58 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active 33.665224 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 80 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=57) [0] r=0 lpr=58 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary 34.671018 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 80 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=57) [0] r=0 lpr=58 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started 34.671108 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 80 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=57) [0] r=0 lpr=58 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 80 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80 pruub=14.336933136s) [2] r=-1 lpr=80 pi=[57,80)/1 crt=37'39 mlcod 37'39 active pruub 159.751373291s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 80 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80 pruub=14.336879730s) [2] r=-1 lpr=80 pi=[57,80)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 159.751373291s@ mbc={}] exit Reset 0.000079 1 0.000118
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 80 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80 pruub=14.336879730s) [2] r=-1 lpr=80 pi=[57,80)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 159.751373291s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 80 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80 pruub=14.336879730s) [2] r=-1 lpr=80 pi=[57,80)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 159.751373291s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 80 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80 pruub=14.336879730s) [2] r=-1 lpr=80 pi=[57,80)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 159.751373291s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 80 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80 pruub=14.336879730s) [2] r=-1 lpr=80 pi=[57,80)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 159.751373291s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 80 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80 pruub=14.336879730s) [2] r=-1 lpr=80 pi=[57,80)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 159.751373291s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 80 heartbeat osd_stat(store_statfs(0x4fe0db000/0x0/0x4ffc00000, data 0x71722/0xf3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:17.023848+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65478656 unmapped: 524288 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 80 handle_osd_map epochs [80,81], i have 80, src has [1,81]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 81 handle_osd_map epochs [81,81], i have 81, src has [1,81]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=-1 lpr=80 pi=[57,80)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.011350 7 0.000079
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=-1 lpr=80 pi=[57,80)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=-1 lpr=80 pi=[57,80)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=-1 lpr=80 pi=[57,80)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.128868 2 0.000064
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=-1 lpr=80 pi=[57,80)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.128905 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=-1 lpr=80 pi=[57,80)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=-1 lpr=80 pi=[57,80)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=-1 lpr=80 pi=[57,80)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000055 1 0.000088
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=-1 lpr=80 pi=[57,80)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] lb MIN local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=-1 lpr=80 DELETING pi=[57,80)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.023592 2 0.000156
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] lb MIN local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=-1 lpr=80 pi=[57,80)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.023705 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] lb MIN local-lis/les=57/59 n=1 ec=43/21 lis/c=57/57 les/c/f=59/59/0 sis=80) [2] r=-1 lpr=80 pi=[57,80)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started 1.164042 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:18.023973+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65544192 unmapped: 458752 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:19.024067+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65544192 unmapped: 458752 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:20.024204+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:49.871942+0000 osd.0 (osd.0) 86 : cluster [DBG] 3.12 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:49.886038+0000 osd.0 (osd.0) 87 : cluster [DBG] 3.12 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65585152 unmapped: 417792 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 87) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:49.871942+0000 osd.0 (osd.0) 86 : cluster [DBG] 3.12 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:49.886038+0000 osd.0 (osd.0) 87 : cluster [DBG] 3.12 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:21.024500+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:50.876282+0000 osd.0 (osd.0) 88 : cluster [DBG] 7.1b scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:50.890342+0000 osd.0 (osd.0) 89 : cluster [DBG] 7.1b scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65585152 unmapped: 417792 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 679271 data_alloc: 218103808 data_used: 143360
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 89) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:50.876282+0000 osd.0 (osd.0) 88 : cluster [DBG] 7.1b scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:50.890342+0000 osd.0 (osd.0) 89 : cluster [DBG] 7.1b scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:22.024802+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 82 heartbeat osd_stat(store_statfs(0x4fe0d3000/0x0/0x4ffc00000, data 0x768e0/0xfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 82 handle_osd_map epochs [83,83], i have 83, src has [1,83]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 368640 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:23.024937+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 368640 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 84 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 49.737518 88 0.000138
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 84 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 49.740407 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 84 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 50.742592 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 84 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started 50.742608 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 84 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 84 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84 pruub=14.262881279s) [2] r=-1 lpr=84 pi=[53,84)/1 crt=44'389 mlcod 0'0 active pruub 166.709609985s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 84 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84 pruub=14.262837410s) [2] r=-1 lpr=84 pi=[53,84)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 166.709609985s@ mbc={}] exit Reset 0.000063 1 0.000094
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 84 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84 pruub=14.262837410s) [2] r=-1 lpr=84 pi=[53,84)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 166.709609985s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 84 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84 pruub=14.262837410s) [2] r=-1 lpr=84 pi=[53,84)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 166.709609985s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 84 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84 pruub=14.262837410s) [2] r=-1 lpr=84 pi=[53,84)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 166.709609985s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 84 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84 pruub=14.262837410s) [2] r=-1 lpr=84 pi=[53,84)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 166.709609985s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 84 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84 pruub=14.262837410s) [2] r=-1 lpr=84 pi=[53,84)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 166.709609985s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 84 handle_osd_map epochs [84,84], i have 84, src has [1,84]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.14 deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.14 deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:24.025037+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:53.819906+0000 osd.0 (osd.0) 90 : cluster [DBG] 8.14 deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:41:53.834009+0000 osd.0 (osd.0) 91 : cluster [DBG] 8.14 deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 385024 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.558607101s of 10.603095055s, submitted: 37
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 91) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:53.819906+0000 osd.0 (osd.0) 90 : cluster [DBG] 8.14 deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:41:53.834009+0000 osd.0 (osd.0) 91 : cluster [DBG] 8.14 deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 85 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84) [2] r=-1 lpr=84 pi=[53,84)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.010249 3 0.000060
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 85 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84) [2] r=-1 lpr=84 pi=[53,84)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.010308 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 85 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=84) [2] r=-1 lpr=84 pi=[53,84)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 85 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 85 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000058 1 0.000111
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 85 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 85 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 85 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 85 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 85 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 85 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 85 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 85 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000040 1 0.000045
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 85 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 85 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 85 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 85 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 85 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:25.025235+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65626112 unmapped: 376832 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 85 handle_osd_map epochs [86,86], i have 86, src has [1,86]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006122 4 0.000058
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.006232 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.002289 5 0.000210
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000054 1 0.000055
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000393 1 0.000074
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035366 2 0.000035
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 86 heartbeat osd_stat(store_statfs(0x4fe0c9000/0x0/0x4ffc00000, data 0x7ba3f/0x103000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:26.025331+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 352256 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 692627 data_alloc: 218103808 data_used: 151552
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 86 handle_osd_map epochs [87,87], i have 87, src has [1,87]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.972881 1 0.000079
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 1.011194 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 2.017441 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 2.017466 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[53,85)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87 pruub=14.991056442s) [2] async=[2] r=-1 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 44'389 active pruub 170.465728760s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87 pruub=14.990988731s) [2] r=-1 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 170.465728760s@ mbc={}] exit Reset 0.000103 1 0.000156
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87 pruub=14.990988731s) [2] r=-1 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 170.465728760s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87 pruub=14.990988731s) [2] r=-1 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 170.465728760s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87 pruub=14.990988731s) [2] r=-1 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 170.465728760s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87 pruub=14.990988731s) [2] r=-1 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 170.465728760s@ mbc={}] exit Start 0.000011 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87 pruub=14.990988731s) [2] r=-1 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 170.465728760s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 87 handle_osd_map epochs [87,87], i have 87, src has [1,87]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:27.025413+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 1392640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _renew_subs
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=-1 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.412574 6 0.000083
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=-1 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=-1 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 88 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 54.179477 100 0.000516
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 88 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 54.182328 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 88 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 55.182436 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 88 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started 55.182449 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 88 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 88 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88 pruub=9.820838928s) [1] r=-1 lpr=88 pi=[53,88)/1 crt=44'389 mlcod 0'0 active pruub 166.708480835s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 88 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88 pruub=9.820796967s) [1] r=-1 lpr=88 pi=[53,88)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 166.708480835s@ mbc={}] exit Reset 0.000071 1 0.000095
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 88 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88 pruub=9.820796967s) [1] r=-1 lpr=88 pi=[53,88)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 166.708480835s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 88 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88 pruub=9.820796967s) [1] r=-1 lpr=88 pi=[53,88)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 166.708480835s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 88 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88 pruub=9.820796967s) [1] r=-1 lpr=88 pi=[53,88)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 166.708480835s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 88 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88 pruub=9.820796967s) [1] r=-1 lpr=88 pi=[53,88)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 166.708480835s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 88 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88 pruub=9.820796967s) [1] r=-1 lpr=88 pi=[53,88)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 166.708480835s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=-1 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000618 1 0.000047
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=-1 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 88 handle_osd_map epochs [87,88], i have 88, src has [1,88]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] lb MIN local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=-1 lpr=87 DELETING pi=[53,87)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.045612 3 0.000186
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] lb MIN local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=-1 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.046268 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] lb MIN local-lis/les=85/86 n=5 ec=45/34 lis/c=85/53 les/c/f=86/54/0 sis=87) [2] r=-1 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.458889 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:28.025514+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 88 heartbeat osd_stat(store_statfs(0x4fcf23000/0x0/0x4ffc00000, data 0x7f025/0x109000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65708032 unmapped: 1343488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 89 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88) [1] r=-1 lpr=88 pi=[53,88)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.604810 3 0.000047
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 89 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88) [1] r=-1 lpr=88 pi=[53,88)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.604842 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 89 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=88) [1] r=-1 lpr=88 pi=[53,88)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 89 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 89 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000074 1 0.000100
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 89 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 89 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 89 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 89 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 89 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 89 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 89 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 89 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000024 1 0.000055
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 89 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 89 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 89 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 89 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 89 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:29.025673+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 1318912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 89 handle_osd_map epochs [89,90], i have 90, src has [1,90]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 90 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001949 4 0.000045
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 90 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.002532 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 90 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 90 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:30.025857+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 1318912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 90 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 90 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 90 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 1.001480 5 0.001211
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 90 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 90 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000102 1 0.000093
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 90 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 90 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000242 1 0.000044
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 90 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 90 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.036171 2 0.000035
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 90 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:31.025988+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 1302528 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 696116 data_alloc: 218103808 data_used: 151552
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 90 handle_osd_map epochs [90,91], i have 90, src has [1,91]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 90 handle_osd_map epochs [91,91], i have 91, src has [1,91]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.966247 1 0.000068
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 2.004454 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 3.007423 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 3.007459 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91 pruub=14.996270180s) [1] async=[1] r=-1 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 44'389 active pruub 175.496398926s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91 pruub=14.995950699s) [1] r=-1 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 175.496398926s@ mbc={}] exit Reset 0.000363 1 0.000639
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91 pruub=14.995950699s) [1] r=-1 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 175.496398926s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91 pruub=14.995950699s) [1] r=-1 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 175.496398926s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91 pruub=14.995950699s) [1] r=-1 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 175.496398926s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91 pruub=14.995950699s) [1] r=-1 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 175.496398926s@ mbc={}] exit Start 0.000093 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91 pruub=14.995950699s) [1] r=-1 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 175.496398926s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 91 handle_osd_map epochs [91,91], i have 91, src has [1,91]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:32.026127+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.16(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90) [0] r=0 lpr=0 pi=[66,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000044 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90) [0] r=0 lpr=0 pi=[66,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90) [0] r=0 lpr=91 pi=[66,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000028
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90) [0] r=0 lpr=91 pi=[66,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90) [0] r=0 lpr=91 pi=[66,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90) [0] r=0 lpr=91 pi=[66,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90) [0] r=0 lpr=91 pi=[66,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90) [0] r=0 lpr=91 pi=[66,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90) [0] r=0 lpr=91 pi=[66,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90) [0] r=0 lpr=91 pi=[66,90)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90) [0] r=0 lpr=91 pi=[66,90)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000117 1 0.000042
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90) [0] r=0 lpr=91 pi=[66,90)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90) [0] r=0 lpr=91 pi=[66,90)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000030 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90) [0] r=0 lpr=91 pi=[66,90)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000164 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 91 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90) [0] r=0 lpr=91 pi=[66,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 1294336 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 91 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90) [0] r=0 lpr=91 pi=[66,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.170088 2 0.000058
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90) [0] r=0 lpr=91 pi=[66,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.171309 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90) [0] r=0 lpr=91 pi=[66,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.171346 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=90) [0] r=0 lpr=91 pi=[66,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[66,92)/2 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[66,92)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000312 1 0.001396
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[66,92)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[66,92)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[66,92)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[66,92)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000083 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[66,92)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=-1 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018708 7 0.000246
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=-1 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=-1 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=-1 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000067 1 0.000091
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=-1 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] lb MIN local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=-1 lpr=91 DELETING pi=[53,91)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.032759 2 0.000131
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] lb MIN local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=-1 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.032878 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] lb MIN local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=91) [1] r=-1 lpr=91 pi=[53,91)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.051770 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 92 heartbeat osd_stat(store_statfs(0x4fcf19000/0x0/0x4ffc00000, data 0x85cd9/0x114000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:33.026237+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:02.892623+0000 osd.0 (osd.0) 92 : cluster [DBG] 11.14 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:02.906656+0000 osd.0 (osd.0) 93 : cluster [DBG] 11.14 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 92 heartbeat osd_stat(store_statfs(0x4fcf16000/0x0/0x4ffc00000, data 0x876d8/0x116000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 1081344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 93) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:02.892623+0000 osd.0 (osd.0) 92 : cluster [DBG] 11.14 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:02.906656+0000 osd.0 (osd.0) 93 : cluster [DBG] 11.14 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 92 heartbeat osd_stat(store_statfs(0x4fcf16000/0x0/0x4ffc00000, data 0x876d8/0x116000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 93 pg[9.16( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.313217 5 0.000173
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 93 pg[9.16( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 93 pg[9.16( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[66,92)/2 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: not registered w/ OSD
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 93 pg[9.16( v 44'389 lc 39'66 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[66,92)/2 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002258 4 0.000193
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 93 pg[9.16( v 44'389 lc 39'66 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[66,92)/2 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 93 pg[9.16( v 44'389 lc 39'66 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[66,92)/2 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000137 1 0.000054
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 93 pg[9.16( v 44'389 lc 39'66 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[66,92)/2 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[66,92)/2 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.028565 1 0.000093
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[66,92)/2 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:34.026376+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 1081344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.991465569s of 10.062144279s, submitted: 82
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[66,92)/2 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.665963 1 0.000031
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[66,92)/2 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.697079 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[66,92)/2 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.010449 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=92) [0]/[2] r=-1 lpr=92 pi=[66,92)/2 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000322 1 0.000386
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000093 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000875 2 0.000197
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 94 handle_osd_map epochs [94,94], i have 94, src has [1,94]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0 olog.dups.size()=9
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001013 2 0.000056
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 94 heartbeat osd_stat(store_statfs(0x4fcf10000/0x0/0x4ffc00000, data 0x8abda/0x11d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:35.026498+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1056768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 94 handle_osd_map epochs [94,95], i have 94, src has [1,95]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 95 handle_osd_map epochs [94,95], i have 95, src has [1,95]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998600 2 0.000122
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000579 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=92/93 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=92/66 les/c/f=93/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/66 les/c/f=95/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002562 3 0.000095
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/66 les/c/f=95/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/66 les/c/f=95/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/66 les/c/f=95/67/0 sis=94) [0] r=0 lpr=94 pi=[66,94)/2 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:36.026668+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 1024000 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 712133 data_alloc: 218103808 data_used: 151552
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 95 heartbeat osd_stat(store_statfs(0x4fcf0c000/0x0/0x4ffc00000, data 0x8c60d/0x120000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:37.026806+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 1024000 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:38.027018+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 999424 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 96 heartbeat osd_stat(store_statfs(0x4fcf0a000/0x0/0x4ffc00000, data 0x8e18a/0x123000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:39.027138+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 991232 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:40.027264+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 991232 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:41.027426+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 958464 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 714433 data_alloc: 218103808 data_used: 151552
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 96 handle_osd_map epochs [97,98], i have 96, src has [1,98]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 97 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 67.901244 129 0.000245
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 97 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 67.904208 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 97 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 68.904801 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 97 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started 68.904818 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 97 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 97 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97 pruub=12.099705696s) [2] r=-1 lpr=97 pi=[53,97)/1 crt=44'389 mlcod 0'0 active pruub 182.709518433s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 98 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97 pruub=12.099660873s) [2] r=-1 lpr=97 pi=[53,97)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 182.709518433s@ mbc={}] exit Reset 0.000066 2 0.000098
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 98 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97 pruub=12.099660873s) [2] r=-1 lpr=97 pi=[53,97)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 182.709518433s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 98 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97 pruub=12.099660873s) [2] r=-1 lpr=97 pi=[53,97)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 182.709518433s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 98 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97 pruub=12.099660873s) [2] r=-1 lpr=97 pi=[53,97)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 182.709518433s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 98 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97 pruub=12.099660873s) [2] r=-1 lpr=97 pi=[53,97)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 182.709518433s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 98 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97 pruub=12.099660873s) [2] r=-1 lpr=97 pi=[53,97)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 182.709518433s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 98 handle_osd_map epochs [97,98], i have 98, src has [1,98]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:42.027558+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:11.838394+0000 osd.0 (osd.0) 94 : cluster [DBG] 7.18 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:11.852592+0000 osd.0 (osd.0) 95 : cluster [DBG] 7.18 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 933888 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 98 heartbeat osd_stat(store_statfs(0x4fcf04000/0x0/0x4ffc00000, data 0x91884/0x129000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 95) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:11.838394+0000 osd.0 (osd.0) 94 : cluster [DBG] 7.18 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:11.852592+0000 osd.0 (osd.0) 95 : cluster [DBG] 7.18 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97) [2] r=-1 lpr=97 pi=[53,97)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.945202 3 0.000030
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97) [2] r=-1 lpr=97 pi=[53,97)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.945241 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=97) [2] r=-1 lpr=97 pi=[53,97)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000173 1 0.000215
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000080 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000820 2 0.000178
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 99 handle_osd_map epochs [99,99], i have 99, src has [1,99]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000022 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:43.027725+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 925696 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 100 handle_osd_map epochs [99,100], i have 100, src has [1,100]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000192 3 0.000064
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.001152 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 100 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.235259 5 0.000589
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000064 1 0.000063
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000486 1 0.000060
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.049514 2 0.000062
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:44.027841+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: handle_auth_request added challenge on 0x563a3e85b400
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 827392 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.021172523s of 10.047088623s, submitted: 35
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 100 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.725044 1 0.000098
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 1.010702 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 2.011912 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 2.012028 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[53,99)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101 pruub=15.224581718s) [2] async=[2] r=-1 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 44'389 active pruub 188.791946411s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101 pruub=15.224447250s) [2] r=-1 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 188.791946411s@ mbc={}] exit Reset 0.000155 1 0.000190
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101 pruub=15.224447250s) [2] r=-1 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 188.791946411s@ mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101 pruub=15.224447250s) [2] r=-1 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 188.791946411s@ mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101 pruub=15.224447250s) [2] r=-1 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 188.791946411s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101 pruub=15.224447250s) [2] r=-1 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 188.791946411s@ mbc={}] exit Start 0.000009 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101 pruub=15.224447250s) [2] r=-1 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 188.791946411s@ mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:45.027959+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 802816 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 101 handle_osd_map epochs [101,102], i have 101, src has [1,102]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 102 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 102 pg[9.1c(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 102 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=0 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000036 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 102 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=0 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 102 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000040
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 102 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 102 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 102 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 102 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000057 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 102 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 102 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 102 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 102 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000083 1 0.000153
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 102 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 102 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000032 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 102 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000161 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 102 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=-1 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009384 7 0.000107
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=-1 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=-1 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=-1 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000061 1 0.000063
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=-1 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] lb MIN local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=-1 lpr=101 DELETING pi=[53,101)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.053069 2 0.000114
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] lb MIN local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=-1 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.053179 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] lb MIN local-lis/les=99/100 n=5 ec=45/34 lis/c=99/53 les/c/f=100/54/0 sis=101) [2] r=-1 lpr=101 pi=[53,101)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.062611 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:46.028055+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 794624 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 724016 data_alloc: 218103808 data_used: 151552
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.957991 2 0.000097
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.958209 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.958313 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000222 1 0.000283
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000095 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 103 handle_osd_map epochs [103,103], i have 103, src has [1,103]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:47.028195+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 103 heartbeat osd_stat(store_statfs(0x4fcef6000/0x0/0x4ffc00000, data 0x99e4b/0x137000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 737280 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _renew_subs
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 104 pg[9.1c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.262125 5 0.000188
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 104 pg[9.1c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 104 pg[9.1c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 104 pg[9.1c( v 44'389 lc 39'125 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.004425 4 0.000141
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 104 pg[9.1c( v 44'389 lc 39'125 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 104 pg[9.1c( v 44'389 lc 39'125 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000066 1 0.000074
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 104 pg[9.1c( v 44'389 lc 39'125 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.049730 1 0.000042
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:48.028305+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 663552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.733588 1 0.000052
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.787942 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.050232 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000114 1 0.000174
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000040 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000035 1 0.000133
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001056 3 0.000051
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000033 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.1f deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.1f deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:49.028438+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:18.801466+0000 osd.0 (osd.0) 96 : cluster [DBG] 7.1f deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:18.815559+0000 osd.0 (osd.0) 97 : cluster [DBG] 7.1f deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 638976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 105 handle_osd_map epochs [105,106], i have 105, src has [1,106]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 97) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:18.801466+0000 osd.0 (osd.0) 96 : cluster [DBG] 7.1f deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:18.815559+0000 osd.0 (osd.0) 97 : cluster [DBG] 7.1f deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003140 2 0.000081
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004342 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=103/104 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 106 handle_osd_map epochs [105,106], i have 106, src has [1,106]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/75 les/c/f=106/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001986 3 0.000372
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/75 les/c/f=106/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/75 les/c/f=106/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/75 les/c/f=106/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 106 handle_osd_map epochs [106,106], i have 106, src has [1,106]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:50.028575+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:19.793696+0000 osd.0 (osd.0) 98 : cluster [DBG] 8.10 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:19.810448+0000 osd.0 (osd.0) 99 : cluster [DBG] 8.10 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 614400 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 99) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:19.793696+0000 osd.0 (osd.0) 98 : cluster [DBG] 8.10 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:19.810448+0000 osd.0 (osd.0) 99 : cluster [DBG] 8.10 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:51.028685+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 614400 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 747611 data_alloc: 218103808 data_used: 151552
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 106 handle_osd_map epochs [106,107], i have 106, src has [1,107]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:52.028786+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:21.793512+0000 osd.0 (osd.0) 100 : cluster [DBG] 11.f scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:21.807497+0000 osd.0 (osd.0) 101 : cluster [DBG] 11.f scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 107 pg[9.1e(unlocked)] enter Initial
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107) [0] r=0 lpr=0 pi=[66,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000043 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107) [0] r=0 lpr=0 pi=[66,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107) [0] r=0 lpr=107 pi=[66,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000019
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107) [0] r=0 lpr=107 pi=[66,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107) [0] r=0 lpr=107 pi=[66,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107) [0] r=0 lpr=107 pi=[66,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107) [0] r=0 lpr=107 pi=[66,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107) [0] r=0 lpr=107 pi=[66,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107) [0] r=0 lpr=107 pi=[66,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107) [0] r=0 lpr=107 pi=[66,107)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107) [0] r=0 lpr=107 pi=[66,107)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000102 1 0.000030
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107) [0] r=0 lpr=107 pi=[66,107)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107) [0] r=0 lpr=107 pi=[66,107)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000026 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107) [0] r=0 lpr=107 pi=[66,107)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000140 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107) [0] r=0 lpr=107 pi=[66,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 606208 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 101) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:21.793512+0000 osd.0 (osd.0) 100 : cluster [DBG] 11.f scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:21.807497+0000 osd.0 (osd.0) 101 : cluster [DBG] 11.f scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 107 heartbeat osd_stat(store_statfs(0x4fcad8000/0x0/0x4ffc00000, data 0xa0a4f/0x144000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:53.028901+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107) [0] r=0 lpr=107 pi=[66,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.003090 2 0.000045
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107) [0] r=0 lpr=107 pi=[66,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.003370 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107) [0] r=0 lpr=107 pi=[66,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.003433 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=107) [0] r=0 lpr=107 pi=[66,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[66,108)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[66,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000726 1 0.000934
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[66,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[66,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[66,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[66,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000041 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 108 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[66,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 630784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:54.029000+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 108 handle_osd_map epochs [108,109], i have 108, src has [1,109]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 109 pg[9.1e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.000450 6 0.000134
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 109 pg[9.1e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 109 pg[9.1e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=66/66 les/c/f=67/67/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[66,108)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 109 pg[9.1e( v 44'389 lc 39'220 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[66,108)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002427 3 0.000128
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 109 pg[9.1e( v 44'389 lc 39'220 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[66,108)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 109 pg[9.1e( v 44'389 lc 39'220 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[66,108)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000094 1 0.000047
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 109 pg[9.1e( v 44'389 lc 39'220 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[66,108)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[66,108)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035625 1 0.000031
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[66,108)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 1613824 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:55.029239+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.419281960s of 10.486537933s, submitted: 50
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[66,108)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.970928 1 0.000060
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[66,108)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.009188 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[66,108)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.009735 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[66,108)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000054 1 0.000088
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000024 1 0.000027
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: merge_log_dups log.dups.size()=0olog.dups.size()=10
Nov 26 11:59:09 compute-0 ceph-osd[88091]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=10
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001315 3 0.000421
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 1605632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:56.029351+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 110 handle_osd_map epochs [110,111], i have 110, src has [1,111]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 110 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.014539 2 0.000044
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.015920 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=108/109 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=108/66 les/c/f=109/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001075 4 0.000079
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000039 0 0.000000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/66 les/c/f=111/67/0 sis=110) [0] r=0 lpr=110 pi=[66,110)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 1638400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 772573 data_alloc: 218103808 data_used: 151552
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:57.029488+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 1638400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:58.029601+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 1630208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac7000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:41:59.029677+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:28.705845+0000 osd.0 (osd.0) 102 : cluster [DBG] 7.3 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:28.720039+0000 osd.0 (osd.0) 103 : cluster [DBG] 7.3 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 103) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:28.705845+0000 osd.0 (osd.0) 102 : cluster [DBG] 7.3 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:28.720039+0000 osd.0 (osd.0) 103 : cluster [DBG] 7.3 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 1597440 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:00.029855+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:29.693703+0000 osd.0 (osd.0) 104 : cluster [DBG] 11.1 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:29.707821+0000 osd.0 (osd.0) 105 : cluster [DBG] 11.1 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 105) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:29.693703+0000 osd.0 (osd.0) 104 : cluster [DBG] 11.1 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:29.707821+0000 osd.0 (osd.0) 105 : cluster [DBG] 11.1 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 1597440 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:01.030014+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 1581056 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 781136 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:02.030148+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 1572864 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:03.030301+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 1572864 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac7000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.c scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.c scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:04.030441+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:33.744602+0000 osd.0 (osd.0) 106 : cluster [DBG] 8.c scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:33.758630+0000 osd.0 (osd.0) 107 : cluster [DBG] 8.c scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 1564672 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 107) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:33.744602+0000 osd.0 (osd.0) 106 : cluster [DBG] 8.c scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:33.758630+0000 osd.0 (osd.0) 107 : cluster [DBG] 8.c scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:05.030611+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 1556480 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.e scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.717268944s of 10.737275124s, submitted: 24
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.e scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:06.030764+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:35.783778+0000 osd.0 (osd.0) 108 : cluster [DBG] 8.e scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:35.797781+0000 osd.0 (osd.0) 109 : cluster [DBG] 8.e scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 1548288 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 782550 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 109) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:35.783778+0000 osd.0 (osd.0) 108 : cluster [DBG] 8.e scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:35.797781+0000 osd.0 (osd.0) 109 : cluster [DBG] 8.e scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:07.030950+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 1548288 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:08.031116+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 1523712 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:09.031259+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 1523712 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:10.031361+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 1523712 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:11.031456+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 1515520 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 782550 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:12.031566+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 1515520 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:13.031704+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 1515520 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:14.031799+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 1507328 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:15.031957+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 1507328 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.f scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.021595955s of 10.027535439s, submitted: 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.f scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:16.032052+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:45.811234+0000 osd.0 (osd.0) 110 : cluster [DBG] 8.f scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:45.836017+0000 osd.0 (osd.0) 111 : cluster [DBG] 8.f scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 1482752 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783697 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 111) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:45.811234+0000 osd.0 (osd.0) 110 : cluster [DBG] 8.f scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:45.836017+0000 osd.0 (osd.0) 111 : cluster [DBG] 8.f scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:17.032184+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 1482752 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:18.032281+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 1482752 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:19.032384+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 1474560 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:20.032518+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 1474560 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:21.034761+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 1474560 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783697 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:22.036286+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 1466368 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:23.037485+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:52.850137+0000 osd.0 (osd.0) 112 : cluster [DBG] 7.4 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:52.864252+0000 osd.0 (osd.0) 113 : cluster [DBG] 7.4 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 1466368 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 113) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:52.850137+0000 osd.0 (osd.0) 112 : cluster [DBG] 7.4 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:52.864252+0000 osd.0 (osd.0) 113 : cluster [DBG] 7.4 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:24.037885+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 1458176 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:25.038128+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 1458176 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.b scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.010987282s of 10.015550613s, submitted: 4
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.b scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:26.038249+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:55.826840+0000 osd.0 (osd.0) 114 : cluster [DBG] 8.b scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:55.840983+0000 osd.0 (osd.0) 115 : cluster [DBG] 8.b scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 1441792 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 785991 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 115) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:55.826840+0000 osd.0 (osd.0) 114 : cluster [DBG] 8.b scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:55.840983+0000 osd.0 (osd.0) 115 : cluster [DBG] 8.b scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:27.038526+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:56.864599+0000 osd.0 (osd.0) 116 : cluster [DBG] 8.9 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:56.878743+0000 osd.0 (osd.0) 117 : cluster [DBG] 8.9 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 1441792 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 117) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:56.864599+0000 osd.0 (osd.0) 116 : cluster [DBG] 8.9 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:56.878743+0000 osd.0 (osd.0) 117 : cluster [DBG] 8.9 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.e scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.e scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:28.038841+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:57.900316+0000 osd.0 (osd.0) 118 : cluster [DBG] 11.e scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:57.914459+0000 osd.0 (osd.0) 119 : cluster [DBG] 11.e scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 1425408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 119) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:57.900316+0000 osd.0 (osd.0) 118 : cluster [DBG] 11.e scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:57.914459+0000 osd.0 (osd.0) 119 : cluster [DBG] 11.e scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:29.039068+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:58.931852+0000 osd.0 (osd.0) 120 : cluster [DBG] 7.f scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:58.945983+0000 osd.0 (osd.0) 121 : cluster [DBG] 7.f scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 1425408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 121) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:58.931852+0000 osd.0 (osd.0) 120 : cluster [DBG] 7.f scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:58.945983+0000 osd.0 (osd.0) 121 : cluster [DBG] 7.f scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:30.039223+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:59.973198+0000 osd.0 (osd.0) 122 : cluster [DBG] 8.6 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:42:59.987309+0000 osd.0 (osd.0) 123 : cluster [DBG] 8.6 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 1417216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 123) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:59.973198+0000 osd.0 (osd.0) 122 : cluster [DBG] 8.6 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:42:59.987309+0000 osd.0 (osd.0) 123 : cluster [DBG] 8.6 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:31.039391+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 1417216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 790580 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:32.039518+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 1409024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:33.039627+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 1409024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:34.039794+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 1400832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.18 deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.18 deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:35.039948+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:05.016389+0000 osd.0 (osd.0) 124 : cluster [DBG] 8.18 deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:05.030488+0000 osd.0 (osd.0) 125 : cluster [DBG] 8.18 deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 1400832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 125) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:05.016389+0000 osd.0 (osd.0) 124 : cluster [DBG] 8.18 deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:05.030488+0000 osd.0 (osd.0) 125 : cluster [DBG] 8.18 deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:36.040139+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 1400832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 791728 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.169614792s of 11.181912422s, submitted: 12
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:37.040281+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:07.008783+0000 osd.0 (osd.0) 126 : cluster [DBG] 11.4 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:07.022988+0000 osd.0 (osd.0) 127 : cluster [DBG] 11.4 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 1392640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 127) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:07.008783+0000 osd.0 (osd.0) 126 : cluster [DBG] 11.4 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:07.022988+0000 osd.0 (osd.0) 127 : cluster [DBG] 11.4 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:38.040461+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:07.997853+0000 osd.0 (osd.0) 128 : cluster [DBG] 8.1d scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:08.011960+0000 osd.0 (osd.0) 129 : cluster [DBG] 8.1d scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 1392640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 129) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:07.997853+0000 osd.0 (osd.0) 128 : cluster [DBG] 8.1d scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:08.011960+0000 osd.0 (osd.0) 129 : cluster [DBG] 8.1d scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:39.040609+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 1384448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:40.040762+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 1384448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:41.040873+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:10.985191+0000 osd.0 (osd.0) 130 : cluster [DBG] 7.13 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:10.999376+0000 osd.0 (osd.0) 131 : cluster [DBG] 7.13 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 1351680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 795172 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 131) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:10.985191+0000 osd.0 (osd.0) 130 : cluster [DBG] 7.13 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:10.999376+0000 osd.0 (osd.0) 131 : cluster [DBG] 7.13 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:42.041071+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:12.001533+0000 osd.0 (osd.0) 132 : cluster [DBG] 7.9 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:12.015671+0000 osd.0 (osd.0) 133 : cluster [DBG] 7.9 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 1335296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 133) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:12.001533+0000 osd.0 (osd.0) 132 : cluster [DBG] 7.9 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:12.015671+0000 osd.0 (osd.0) 133 : cluster [DBG] 7.9 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:43.041243+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:12.971812+0000 osd.0 (osd.0) 134 : cluster [DBG] 11.6 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:12.985798+0000 osd.0 (osd.0) 135 : cluster [DBG] 11.6 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 1335296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 135) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:12.971812+0000 osd.0 (osd.0) 134 : cluster [DBG] 11.6 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:12.985798+0000 osd.0 (osd.0) 135 : cluster [DBG] 11.6 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:44.041466+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 1335296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:45.041603+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 1327104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:46.041669+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:15.999336+0000 osd.0 (osd.0) 136 : cluster [DBG] 8.1f scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:16.013454+0000 osd.0 (osd.0) 137 : cluster [DBG] 8.1f scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 1318912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 798615 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 137) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:15.999336+0000 osd.0 (osd.0) 136 : cluster [DBG] 8.1f scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:16.013454+0000 osd.0 (osd.0) 137 : cluster [DBG] 8.1f scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:47.041852+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 1310720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:48.042004+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 1310720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:49.042143+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 1310720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:50.042299+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 1302528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:51.042434+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 1302528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 798615 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:52.042565+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 1294336 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:53.042713+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 1294336 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.847852707s of 16.863052368s, submitted: 12
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:54.042828+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:23.872132+0000 osd.0 (osd.0) 138 : cluster [DBG] 11.19 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:23.885986+0000 osd.0 (osd.0) 139 : cluster [DBG] 11.19 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 1286144 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 139) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:23.872132+0000 osd.0 (osd.0) 138 : cluster [DBG] 11.19 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:23.885986+0000 osd.0 (osd.0) 139 : cluster [DBG] 11.19 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:55.043091+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 1277952 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:56.043244+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 1277952 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 799764 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:57.043380+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 1277952 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:58.043526+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 1261568 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:42:59.043699+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:28.875509+0000 osd.0 (osd.0) 140 : cluster [DBG] 8.1a scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:28.889245+0000 osd.0 (osd.0) 141 : cluster [DBG] 8.1a scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 1261568 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 141) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:28.875509+0000 osd.0 (osd.0) 140 : cluster [DBG] 8.1a scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:28.889245+0000 osd.0 (osd.0) 141 : cluster [DBG] 8.1a scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:00.044071+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 1253376 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:01.044204+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 1236992 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 800912 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:02.044345+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 1228800 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:03.044481+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:32.839293+0000 osd.0 (osd.0) 142 : cluster [DBG] 7.6 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:32.853404+0000 osd.0 (osd.0) 143 : cluster [DBG] 7.6 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 1228800 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 143) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:32.839293+0000 osd.0 (osd.0) 142 : cluster [DBG] 7.6 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:32.853404+0000 osd.0 (osd.0) 143 : cluster [DBG] 7.6 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:04.044667+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 1212416 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:05.044806+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 1204224 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:06.044931+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 1204224 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 802059 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.984128952s of 12.991313934s, submitted: 6
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:07.045093+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:36.863203+0000 osd.0 (osd.0) 144 : cluster [DBG] 11.10 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:36.877341+0000 osd.0 (osd.0) 145 : cluster [DBG] 11.10 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 1196032 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 145) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:36.863203+0000 osd.0 (osd.0) 144 : cluster [DBG] 11.10 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:36.877341+0000 osd.0 (osd.0) 145 : cluster [DBG] 11.10 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:08.045283+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:37.868225+0000 osd.0 (osd.0) 146 : cluster [DBG] 11.17 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:37.882303+0000 osd.0 (osd.0) 147 : cluster [DBG] 11.17 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 1196032 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 147) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:37.868225+0000 osd.0 (osd.0) 146 : cluster [DBG] 11.17 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:37.882303+0000 osd.0 (osd.0) 147 : cluster [DBG] 11.17 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:09.045424+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 1171456 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:10.045556+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 1171456 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:11.046345+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 1163264 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 804357 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:12.046472+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 1163264 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:13.046629+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 1155072 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:14.046756+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 1155072 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:15.046897+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 1155072 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:16.047053+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 1146880 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 804357 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:17.048572+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 1146880 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:18.048679+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 1146880 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.d scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.935089111s of 11.939572334s, submitted: 4
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.d scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:19.048779+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:48.803073+0000 osd.0 (osd.0) 148 : cluster [DBG] 10.d scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:48.820441+0000 osd.0 (osd.0) 149 : cluster [DBG] 10.d scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 1146880 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 149) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:48.803073+0000 osd.0 (osd.0) 148 : cluster [DBG] 10.d scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:48.820441+0000 osd.0 (osd.0) 149 : cluster [DBG] 10.d scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:20.048903+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 1146880 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:21.049006+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 1122304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 805505 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:22.049096+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 1114112 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:23.049297+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 1114112 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:24.049400+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 1105920 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:25.049522+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 1105920 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:26.049663+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:55.786769+0000 osd.0 (osd.0) 150 : cluster [DBG] 10.7 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:43:55.800911+0000 osd.0 (osd.0) 151 : cluster [DBG] 10.7 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 1097728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 806653 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 151) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:55.786769+0000 osd.0 (osd.0) 150 : cluster [DBG] 10.7 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:43:55.800911+0000 osd.0 (osd.0) 151 : cluster [DBG] 10.7 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:27.049824+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 1097728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:28.049928+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 1097728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:29.050063+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 1089536 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:30.050164+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 1089536 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.972928047s of 11.977856636s, submitted: 4
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:31.050282+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:00.780663+0000 osd.0 (osd.0) 152 : cluster [DBG] 10.4 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:00.794773+0000 osd.0 (osd.0) 153 : cluster [DBG] 10.4 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 807801 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 153) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:00.780663+0000 osd.0 (osd.0) 152 : cluster [DBG] 10.4 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:00.794773+0000 osd.0 (osd.0) 153 : cluster [DBG] 10.4 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:32.050444+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 1073152 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:33.050546+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 1073152 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.8 deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.8 deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:34.050659+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:03.806521+0000 osd.0 (osd.0) 154 : cluster [DBG] 10.8 deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:03.820521+0000 osd.0 (osd.0) 155 : cluster [DBG] 10.8 deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 155) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:03.806521+0000 osd.0 (osd.0) 154 : cluster [DBG] 10.8 deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:03.820521+0000 osd.0 (osd.0) 155 : cluster [DBG] 10.8 deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:35.050868+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:04.777966+0000 osd.0 (osd.0) 156 : cluster [DBG] 10.1 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:04.792112+0000 osd.0 (osd.0) 157 : cluster [DBG] 10.1 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 157) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:04.777966+0000 osd.0 (osd.0) 156 : cluster [DBG] 10.1 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:04.792112+0000 osd.0 (osd.0) 157 : cluster [DBG] 10.1 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:36.050983+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:05.822973+0000 osd.0 (osd.0) 158 : cluster [DBG] 10.15 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:05.840673+0000 osd.0 (osd.0) 159 : cluster [DBG] 10.15 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 1056768 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 811246 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 159) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:05.822973+0000 osd.0 (osd.0) 158 : cluster [DBG] 10.15 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:05.840673+0000 osd.0 (osd.0) 159 : cluster [DBG] 10.15 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:37.051101+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 1056768 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.e scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.e scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:38.051197+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:07.866337+0000 osd.0 (osd.0) 160 : cluster [DBG] 10.e scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:07.884045+0000 osd.0 (osd.0) 161 : cluster [DBG] 10.e scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 1056768 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 161) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:07.866337+0000 osd.0 (osd.0) 160 : cluster [DBG] 10.e scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:07.884045+0000 osd.0 (osd.0) 161 : cluster [DBG] 10.e scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:39.051326+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:08.882532+0000 osd.0 (osd.0) 162 : cluster [DBG] 10.16 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:08.896588+0000 osd.0 (osd.0) 163 : cluster [DBG] 10.16 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 163) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:08.882532+0000 osd.0 (osd.0) 162 : cluster [DBG] 10.16 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:08.896588+0000 osd.0 (osd.0) 163 : cluster [DBG] 10.16 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:40.051449+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.9 deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.056323051s of 10.070478439s, submitted: 12
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.9 deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:41.051558+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:10.851224+0000 osd.0 (osd.0) 164 : cluster [DBG] 10.9 deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:10.868866+0000 osd.0 (osd.0) 165 : cluster [DBG] 10.9 deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 814691 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 165) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:10.851224+0000 osd.0 (osd.0) 164 : cluster [DBG] 10.9 deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:10.868866+0000 osd.0 (osd.0) 165 : cluster [DBG] 10.9 deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:42.051686+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:43.051789+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:12.820115+0000 osd.0 (osd.0) 166 : cluster [DBG] 10.17 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:12.834250+0000 osd.0 (osd.0) 167 : cluster [DBG] 10.17 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 167) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:12.820115+0000 osd.0 (osd.0) 166 : cluster [DBG] 10.17 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:12.834250+0000 osd.0 (osd.0) 167 : cluster [DBG] 10.17 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:44.051938+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:45.052117+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:46.052280+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 815840 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:47.052456+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:48.052585+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:49.052681+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:18.854298+0000 osd.0 (osd.0) 168 : cluster [DBG] 10.1e scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:18.868443+0000 osd.0 (osd.0) 169 : cluster [DBG] 10.1e scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 169) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:18.854298+0000 osd.0 (osd.0) 168 : cluster [DBG] 10.1e scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:18.868443+0000 osd.0 (osd.0) 169 : cluster [DBG] 10.1e scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:50.052825+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:51.052942+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 816989 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:52.053046+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:53.053150+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:54.053260+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:55.053440+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:56.053604+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 816989 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:57.053731+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 966656 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.d scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.103742599s of 17.110662460s, submitted: 6
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.d scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:58.053877+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:27.961854+0000 osd.0 (osd.0) 170 : cluster [DBG] 9.d scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:28.004248+0000 osd.0 (osd.0) 171 : cluster [DBG] 9.d scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 171) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:27.961854+0000 osd.0 (osd.0) 170 : cluster [DBG] 9.d scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:28.004248+0000 osd.0 (osd.0) 171 : cluster [DBG] 9.d scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:43:59.054093+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:00.054193+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:01.054300+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 818136 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:02.054399+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 1 last_log 172 sent 171 num 1 unsent 1 sending 1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:32.035992+0000 osd.0 (osd.0) 172 : cluster [DBG] 9.9 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 172) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:32.035992+0000 osd.0 (osd.0) 172 : cluster [DBG] 9.9 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 925696 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:03.054528+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 1 last_log 173 sent 172 num 1 unsent 1 sending 1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:32.067605+0000 osd.0 (osd.0) 173 : cluster [DBG] 9.9 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 173) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:32.067605+0000 osd.0 (osd.0) 173 : cluster [DBG] 9.9 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 925696 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:04.054671+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 925696 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:05.054788+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 909312 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:06.054882+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 909312 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 819283 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:07.054993+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:36.915892+0000 osd.0 (osd.0) 174 : cluster [DBG] 9.1 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:36.954766+0000 osd.0 (osd.0) 175 : cluster [DBG] 9.1 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 175) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:36.915892+0000 osd.0 (osd.0) 174 : cluster [DBG] 9.1 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:36.954766+0000 osd.0 (osd.0) 175 : cluster [DBG] 9.1 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:08.055184+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:09.055293+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:10.055409+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.932678223s of 12.940945625s, submitted: 6
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:11.055511+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:40.902741+0000 osd.0 (osd.0) 176 : cluster [DBG] 9.3 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:40.945115+0000 osd.0 (osd.0) 177 : cluster [DBG] 9.3 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 177) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:40.902741+0000 osd.0 (osd.0) 176 : cluster [DBG] 9.3 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:40.945115+0000 osd.0 (osd.0) 177 : cluster [DBG] 9.3 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 821577 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:12.055667+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:13.055769+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:14.055870+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:15.056021+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:44.828585+0000 osd.0 (osd.0) 178 : cluster [DBG] 9.1b scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:44.849802+0000 osd.0 (osd.0) 179 : cluster [DBG] 9.1b scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 179) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:44.828585+0000 osd.0 (osd.0) 178 : cluster [DBG] 9.1b scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:44.849802+0000 osd.0 (osd.0) 179 : cluster [DBG] 9.1b scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:16.056175+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:45.781732+0000 osd.0 (osd.0) 180 : cluster [DBG] 9.1d scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:45.813524+0000 osd.0 (osd.0) 181 : cluster [DBG] 9.1d scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 181) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:45.781732+0000 osd.0 (osd.0) 180 : cluster [DBG] 9.1d scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:45.813524+0000 osd.0 (osd.0) 181 : cluster [DBG] 9.1d scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 823873 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:17.056332+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.5 deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.5 deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:18.056452+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:47.782994+0000 osd.0 (osd.0) 182 : cluster [DBG] 9.5 deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:47.821838+0000 osd.0 (osd.0) 183 : cluster [DBG] 9.5 deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 183) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:47.782994+0000 osd.0 (osd.0) 182 : cluster [DBG] 9.5 deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:47.821838+0000 osd.0 (osd.0) 183 : cluster [DBG] 9.5 deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.b scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.b scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:19.056610+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:48.770315+0000 osd.0 (osd.0) 184 : cluster [DBG] 9.b scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:48.798589+0000 osd.0 (osd.0) 185 : cluster [DBG] 9.b scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 185) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:48.770315+0000 osd.0 (osd.0) 184 : cluster [DBG] 9.b scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:48.798589+0000 osd.0 (osd.0) 185 : cluster [DBG] 9.b scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:20.056796+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:49.817817+0000 osd.0 (osd.0) 186 : cluster [DBG] 9.11 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:49.849692+0000 osd.0 (osd.0) 187 : cluster [DBG] 9.11 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 187) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:49.817817+0000 osd.0 (osd.0) 186 : cluster [DBG] 9.11 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:49.849692+0000 osd.0 (osd.0) 187 : cluster [DBG] 9.11 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:21.056920+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 827315 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:22.057262+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:23.057363+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.869021416s of 12.890291214s, submitted: 12
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:24.057482+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:53.793077+0000 osd.0 (osd.0) 188 : cluster [DBG] 6.3 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:53.814320+0000 osd.0 (osd.0) 189 : cluster [DBG] 6.3 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 189) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:53.793077+0000 osd.0 (osd.0) 188 : cluster [DBG] 6.3 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:53.814320+0000 osd.0 (osd.0) 189 : cluster [DBG] 6.3 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 843776 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:25.057780+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:54.752840+0000 osd.0 (osd.0) 190 : cluster [DBG] 6.7 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:54.770468+0000 osd.0 (osd.0) 191 : cluster [DBG] 6.7 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 191) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:54.752840+0000 osd.0 (osd.0) 190 : cluster [DBG] 6.7 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:54.770468+0000 osd.0 (osd.0) 191 : cluster [DBG] 6.7 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68321280 unmapped: 827392 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:26.057917+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:55.727351+0000 osd.0 (osd.0) 192 : cluster [DBG] 6.5 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:44:55.748292+0000 osd.0 (osd.0) 193 : cluster [DBG] 6.5 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 193) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:55.727351+0000 osd.0 (osd.0) 192 : cluster [DBG] 6.5 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:44:55.748292+0000 osd.0 (osd.0) 193 : cluster [DBG] 6.5 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68321280 unmapped: 827392 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 830756 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:27.058085+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68321280 unmapped: 827392 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:28.058198+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68329472 unmapped: 819200 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:29.058335+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68329472 unmapped: 819200 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:30.058498+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68337664 unmapped: 811008 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 6.9 deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 6.9 deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:31.058664+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:45:00.743618+0000 osd.0 (osd.0) 194 : cluster [DBG] 6.9 deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:45:00.757860+0000 osd.0 (osd.0) 195 : cluster [DBG] 6.9 deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 195) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:45:00.743618+0000 osd.0 (osd.0) 194 : cluster [DBG] 6.9 deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:45:00.757860+0000 osd.0 (osd.0) 195 : cluster [DBG] 6.9 deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 6.a deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 6.a deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68354048 unmapped: 794624 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 833050 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:32.058809+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:45:01.709816+0000 osd.0 (osd.0) 196 : cluster [DBG] 6.a deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:45:01.723939+0000 osd.0 (osd.0) 197 : cluster [DBG] 6.a deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 197) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:45:01.709816+0000 osd.0 (osd.0) 196 : cluster [DBG] 6.a deep-scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:45:01.723939+0000 osd.0 (osd.0) 197 : cluster [DBG] 6.a deep-scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68354048 unmapped: 794624 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:33.058953+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:45:02.687460+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.16 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:45:02.715708+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.16 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 199) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:45:02.687460+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.16 scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:45:02.715708+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.16 scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68378624 unmapped: 770048 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:34.059104+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:45:03.719361+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.1c scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:45:03.758227+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.1c scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 201) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:45:03.719361+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.1c scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:45:03.758227+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.1c scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68378624 unmapped: 770048 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.942304611s of 10.958786011s, submitted: 14
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:35.059239+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:45:04.751909+0000 osd.0 (osd.0) 202 : cluster [DBG] 9.1e scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  will send 2025-11-26T11:45:04.783680+0000 osd.0 (osd.0) 203 : cluster [DBG] 9.1e scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client handle_log_ack log(last 203) v1
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:45:04.751909+0000 osd.0 (osd.0) 202 : cluster [DBG] 9.1e scrub starts
Nov 26 11:59:09 compute-0 ceph-osd[88091]: log_client  logged 2025-11-26T11:45:04.783680+0000 osd.0 (osd.0) 203 : cluster [DBG] 9.1e scrub ok
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68395008 unmapped: 753664 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:36.059359+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68395008 unmapped: 753664 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:37.059467+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68395008 unmapped: 753664 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:38.059575+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68403200 unmapped: 745472 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:39.059695+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68403200 unmapped: 745472 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:40.059806+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68411392 unmapped: 737280 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:41.059924+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68411392 unmapped: 737280 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:42.060068+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68419584 unmapped: 729088 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:43.060184+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68419584 unmapped: 729088 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:44.060311+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:45.060448+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:46.060608+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:47.060761+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 712704 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:48.060855+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 712704 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:49.060981+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 712704 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:50.061105+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:51.061274+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:52.061399+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 688128 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:53.061523+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 688128 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:54.061649+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:55.061768+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:56.061867+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:57.061973+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 663552 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:58.062105+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 663552 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:44:59.062202+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 663552 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:00.062301+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 655360 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:01.062403+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 655360 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:02.062509+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 655360 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:03.062610+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68501504 unmapped: 647168 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:04.062676+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68501504 unmapped: 647168 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:05.062961+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:06.063114+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:07.063219+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:08.063323+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:09.063437+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:10.063559+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 630784 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:11.063677+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 630784 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:12.063780+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:13.063888+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:14.063993+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:15.064181+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:16.064295+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:17.064401+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:18.064514+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:19.064650+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:20.064763+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:21.064861+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:22.064989+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 589824 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:23.065101+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 589824 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:24.065227+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:25.065347+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:26.065453+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:27.065554+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 573440 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:28.065661+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 573440 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:29.065766+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 557056 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:30.065857+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 557056 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:31.065950+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68599808 unmapped: 548864 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:32.066052+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68599808 unmapped: 548864 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:33.066204+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68599808 unmapped: 548864 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:34.066322+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 540672 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:35.066452+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 540672 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:36.066548+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 540672 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:37.066658+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 532480 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:38.066776+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 532480 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:39.066883+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68624384 unmapped: 524288 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:40.066999+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68624384 unmapped: 524288 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:41.067111+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 516096 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:42.067276+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68624384 unmapped: 524288 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:43.067412+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68624384 unmapped: 524288 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:44.067516+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68624384 unmapped: 524288 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:45.067673+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68624384 unmapped: 524288 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:46.067816+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 507904 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:47.067961+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 507904 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:48.068098+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 499712 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:49.068214+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 499712 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:50.068321+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 499712 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:51.068424+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 491520 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:52.068524+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 491520 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:53.068655+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 491520 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:54.068760+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 483328 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:55.068910+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 483328 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:56.069030+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 483328 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:57.069133+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 475136 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:58.069278+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 475136 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:45:59.069376+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 458752 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:00.069489+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 458752 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:01.069598+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 458752 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:02.069687+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 450560 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:03.069786+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 450560 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:04.069882+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 450560 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:05.070055+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 442368 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:06.070194+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 442368 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:07.070307+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 434176 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:08.070400+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 434176 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:09.070501+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 434176 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:10.070596+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 425984 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:11.070697+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 425984 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:12.070797+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 417792 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:13.070897+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 417792 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:14.070996+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 417792 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:15.071108+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 409600 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:16.071208+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 409600 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:17.071318+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 409600 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:18.071430+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:19.071589+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:20.071670+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:21.071769+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:22.071881+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:23.071967+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:24.072070+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:25.072188+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:26.072277+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:27.072365+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:28.072482+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:29.072597+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:30.072705+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:31.072801+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:32.072938+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:33.073043+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:34.073157+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:35.073291+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:36.073697+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:37.073841+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:38.073970+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:39.074063+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:40.074200+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68812800 unmapped: 335872 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:41.074351+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68812800 unmapped: 335872 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:42.074446+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:43.074589+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:44.074713+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 319488 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:45.074881+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:46.074970+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:47.075070+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:48.075195+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:49.075290+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:50.075381+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:51.075478+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:52.075571+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:53.075703+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:54.075803+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:55.075920+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:56.076013+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68878336 unmapped: 270336 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:57.076104+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68878336 unmapped: 270336 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:58.076226+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68886528 unmapped: 262144 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:46:59.076329+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68886528 unmapped: 262144 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:00.076435+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68886528 unmapped: 262144 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:01.076540+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68894720 unmapped: 253952 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:02.076681+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68894720 unmapped: 253952 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:03.076812+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68894720 unmapped: 253952 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:04.076916+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68902912 unmapped: 245760 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:05.077049+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68902912 unmapped: 245760 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:06.077214+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68911104 unmapped: 237568 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:07.077304+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68911104 unmapped: 237568 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:08.077394+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68919296 unmapped: 229376 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:09.077485+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68927488 unmapped: 221184 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:10.077581+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68927488 unmapped: 221184 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:11.077679+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68935680 unmapped: 212992 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:12.077769+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68935680 unmapped: 212992 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:13.077868+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68935680 unmapped: 212992 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:14.077967+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68952064 unmapped: 196608 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:15.078076+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68952064 unmapped: 196608 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:16.078171+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68960256 unmapped: 188416 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:17.078274+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68960256 unmapped: 188416 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:18.078440+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68960256 unmapped: 188416 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:19.078538+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 180224 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:20.078627+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 180224 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:21.078842+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 180224 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:22.078934+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 172032 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:23.079020+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:24.079104+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 172032 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:25.079256+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 155648 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:26.079409+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 155648 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:27.079556+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 155648 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:28.079655+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69001216 unmapped: 147456 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:29.079741+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69001216 unmapped: 147456 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:30.079833+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 131072 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:31.079924+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 131072 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:32.080038+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 131072 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:33.080142+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69025792 unmapped: 122880 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:34.080326+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69025792 unmapped: 122880 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:35.080532+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69025792 unmapped: 122880 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:36.080624+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:37.080745+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:38.080839+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 98304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:39.080946+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 98304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:40.081045+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 98304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:41.081161+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 90112 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:42.081250+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 90112 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:43.081340+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 90112 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:44.081429+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 81920 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:45.081536+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 81920 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:46.081646+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 73728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:47.081739+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 73728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:48.081831+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 73728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:49.081922+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 65536 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:50.082011+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 65536 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:51.082105+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 65536 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:52.082204+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 57344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:53.082298+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 57344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:54.082400+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:55.082552+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:56.082692+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:57.082782+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69115904 unmapped: 32768 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:58.082880+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69115904 unmapped: 32768 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:47:59.083015+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69115904 unmapped: 32768 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:00.083151+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 24576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:01.083320+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 24576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:02.083452+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 16384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:03.083553+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 16384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:04.083668+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 16384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:05.083997+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 8192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:06.084132+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 8192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:07.084228+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 0 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:08.084358+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 0 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:09.084453+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 0 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:10.084549+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69156864 unmapped: 1040384 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:11.084657+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69156864 unmapped: 1040384 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:12.084746+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:13.084836+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:14.084937+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:15.085051+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69189632 unmapped: 1007616 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:16.085154+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69189632 unmapped: 1007616 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:17.085263+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69197824 unmapped: 999424 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:18.085358+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69197824 unmapped: 999424 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:19.085453+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69197824 unmapped: 999424 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:20.085548+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69197824 unmapped: 999424 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:21.085657+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:22.085765+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:23.085876+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69214208 unmapped: 983040 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:24.085981+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69214208 unmapped: 983040 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:25.086112+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69214208 unmapped: 983040 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:26.086217+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:27.086327+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:28.086429+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:29.086536+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 966656 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:30.086676+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 966656 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:31.086775+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:32.086884+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:33.086999+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:34.087228+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:35.087488+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:36.087589+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69263360 unmapped: 933888 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:37.087690+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 925696 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:38.087788+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 925696 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:39.087887+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:40.087989+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:41.088093+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:42.088221+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69296128 unmapped: 901120 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:43.088338+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69296128 unmapped: 901120 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:44.088433+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69304320 unmapped: 892928 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:45.088558+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69304320 unmapped: 892928 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:46.088690+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69304320 unmapped: 892928 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:47.088829+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69312512 unmapped: 884736 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:48.088925+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69312512 unmapped: 884736 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:49.089050+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 876544 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:50.089147+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 876544 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 5490 writes, 23K keys, 5490 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5490 writes, 826 syncs, 6.65 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5490 writes, 23K keys, 5490 commit groups, 1.0 writes per commit group, ingest: 18.37 MB, 0.03 MB/s
                                           Interval WAL: 5490 writes, 826 syncs, 6.65 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da97090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da97090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.3      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.3      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.3      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da97090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a3da971f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:51.089274+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 819200 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:52.089363+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69386240 unmapped: 811008 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:53.089464+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69386240 unmapped: 811008 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:54.089562+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69394432 unmapped: 802816 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:55.089712+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69394432 unmapped: 802816 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:56.089808+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69394432 unmapped: 802816 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:57.089915+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69402624 unmapped: 794624 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:58.090035+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69402624 unmapped: 794624 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:48:59.090144+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69410816 unmapped: 786432 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:00.090250+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69419008 unmapped: 778240 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:01.090409+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69419008 unmapped: 778240 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:02.090515+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69427200 unmapped: 770048 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:03.090611+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69427200 unmapped: 770048 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:04.090697+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69427200 unmapped: 770048 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:05.090831+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 761856 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:06.090945+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 761856 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:07.091058+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 761856 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:08.091162+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69443584 unmapped: 753664 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:09.091329+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69451776 unmapped: 745472 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:10.091497+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69459968 unmapped: 737280 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:11.091605+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69459968 unmapped: 737280 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:12.091713+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69459968 unmapped: 737280 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:13.091811+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69468160 unmapped: 729088 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:14.091918+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69468160 unmapped: 729088 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:15.092046+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69468160 unmapped: 729088 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:16.092140+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69484544 unmapped: 712704 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:17.092255+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69484544 unmapped: 712704 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:18.092387+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69492736 unmapped: 704512 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:19.092510+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69492736 unmapped: 704512 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:20.092591+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69492736 unmapped: 704512 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:21.092752+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 696320 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:22.092858+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 696320 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:23.093014+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:24.093144+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:25.093285+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:26.093388+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 671744 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:27.093486+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 671744 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:28.093732+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 663552 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:29.093831+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 663552 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:30.093945+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69541888 unmapped: 655360 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:31.094054+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69541888 unmapped: 655360 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:32.094156+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69541888 unmapped: 655360 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:33.094311+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:34.094479+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:35.094679+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:36.094842+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69558272 unmapped: 638976 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:37.095009+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69558272 unmapped: 638976 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:38.095167+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69558272 unmapped: 638976 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:39.095258+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69566464 unmapped: 630784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:40.095350+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69566464 unmapped: 630784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:41.095452+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 622592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:42.095556+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 622592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:43.095671+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 308.484497070s of 308.486755371s, submitted: 2
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 565248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:44.095789+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 1318912 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:45.095921+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 1318912 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:46.096008+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 1302528 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:47.096107+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 1302528 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:48.096209+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 1302528 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:49.096309+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 1302528 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:50.096403+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 1302528 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:51.096499+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 1302528 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:52.096586+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 1302528 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:53.096687+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1294336 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:54.096788+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1294336 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:55.096904+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1294336 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:56.096996+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1294336 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:57.097101+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1294336 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:58.097192+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 1286144 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:49:59.097284+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 1286144 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:00.097382+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 1286144 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:01.097488+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69976064 unmapped: 1269760 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:02.097594+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69976064 unmapped: 1269760 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:03.097736+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69984256 unmapped: 1261568 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:04.097840+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69984256 unmapped: 1261568 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:05.097995+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 1253376 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:06.098098+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 1253376 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:07.098200+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1245184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:08.098313+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1245184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:09.098424+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1245184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:10.098527+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70008832 unmapped: 1236992 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:11.098689+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70008832 unmapped: 1236992 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:12.098783+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 1196032 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:13.098876+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 1196032 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:14.098981+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 1187840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:15.099099+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:16.099206+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:17.099315+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 1171456 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:18.099424+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 1171456 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:19.099528+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 1171456 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:20.099650+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:21.099761+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 1146880 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:22.099866+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 1146880 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:23.099954+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1138688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:24.100076+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1138688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:25.100231+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:26.100334+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:27.100662+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:28.100752+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 1122304 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:29.100845+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 1122304 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:30.100955+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:31.101055+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:32.101152+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 1105920 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:33.101250+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 1105920 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:34.101381+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70148096 unmapped: 1097728 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:35.101512+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70148096 unmapped: 1097728 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:36.101817+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70148096 unmapped: 1097728 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:37.101910+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:38.102018+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:39.102118+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70172672 unmapped: 1073152 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:40.102230+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70172672 unmapped: 1073152 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:41.102329+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 1056768 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:42.102438+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:43.102542+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:44.102649+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:45.102761+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:46.102856+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:47.102952+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:48.103083+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:49.103217+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:50.103310+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:51.103405+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:52.103499+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:53.103598+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:54.103678+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:55.103787+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:56.103891+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:57.103983+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:58.104075+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:50:59.104186+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:00.104290+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:01.104395+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:02.104490+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:03.104594+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:04.104672+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:05.104788+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:06.104917+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:07.105044+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:08.105164+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:09.105298+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:10.105422+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 999424 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:11.105526+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:12.105631+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:13.105742+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:14.105855+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:15.105969+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:16.106062+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:17.106162+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:18.106265+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:19.106383+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:20.106480+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:21.106601+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:22.106680+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:23.106791+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:24.106886+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:25.106978+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:26.107094+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:27.107218+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:28.107314+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:29.107417+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:30.107506+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:31.107603+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:32.107665+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:33.107786+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:34.107899+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:35.108016+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:36.108111+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:37.108225+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:38.108388+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:39.108495+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:40.108604+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:41.108680+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:42.111779+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:43.111885+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:44.111984+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:45.112100+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:46.112224+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:47.112319+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:48.112421+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:49.112522+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:50.112654+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:51.112801+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:52.112952+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:53.113072+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:54.113219+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:55.113371+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:56.113499+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:57.113692+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:58.113815+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:51:59.113970+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:00.114078+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:01.114216+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 917504 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:02.114341+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 917504 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:03.114453+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 917504 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:04.114558+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 917504 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:05.114712+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 917504 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:06.114827+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 917504 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:07.114956+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:08.115072+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:09.115197+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:10.115311+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:11.115437+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:12.115564+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:13.115692+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:14.115788+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:15.115948+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:16.116058+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:17.116194+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:18.116334+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:19.116486+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:20.116580+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:21.116729+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:22.116856+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:23.116964+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:24.117061+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:25.117224+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:26.117357+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:27.117501+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:28.117610+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:29.117740+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:30.117882+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:31.118046+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:32.118179+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:33.118323+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:34.118463+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:35.118623+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:36.118814+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:37.118957+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:38.119060+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:39.119197+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:40.119330+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:41.119467+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70320128 unmapped: 925696 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:42.119572+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70320128 unmapped: 925696 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:43.119665+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70320128 unmapped: 925696 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:44.119767+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 917504 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:45.119880+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 917504 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:46.119981+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 917504 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:47.120079+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 917504 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:48.120129+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 917504 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:49.120228+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 917504 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:50.120331+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 917504 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:51.120452+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 901120 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:52.120539+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 901120 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:53.120662+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 901120 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:54.120815+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 901120 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:55.120949+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 901120 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:56.121068+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 901120 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:57.121231+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 901120 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:58.121319+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 901120 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:52:59.121444+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 901120 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:00.121579+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 901120 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:01.121738+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 901120 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:02.121869+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 901120 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:03.121965+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 901120 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:04.122121+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 901120 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:05.122255+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 901120 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:06.122375+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 901120 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:07.122494+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 901120 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:08.122613+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 901120 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:09.122697+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 901120 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:10.122846+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 876544 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:11.122993+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 876544 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:12.123146+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 876544 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:13.123280+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 876544 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:14.123385+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 876544 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:15.123500+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 876544 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:16.123616+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 876544 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:17.123774+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 876544 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:18.123874+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70377472 unmapped: 868352 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:19.123998+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 860160 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:20.124097+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 860160 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:21.124220+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 860160 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:22.124343+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 860160 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:23.124501+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 860160 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:24.124686+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 860160 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:25.124842+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 860160 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:26.124968+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 860160 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:27.125105+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 860160 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:28.125236+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 860160 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:29.125366+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 860160 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:30.125495+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 843776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:31.125681+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 843776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:32.125840+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 843776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:33.125952+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 843776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:34.126076+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 843776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:35.126219+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 843776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:36.126340+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 843776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:37.126463+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 843776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:38.126567+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 843776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:39.126687+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 843776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:40.126798+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 843776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:41.126911+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 843776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:42.127020+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 843776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:43.127138+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 843776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:44.127260+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 843776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:45.127401+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 843776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:46.127557+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 835584 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:47.127722+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 835584 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:48.127815+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 835584 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:49.127908+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 835584 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:50.128033+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 819200 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:51.128127+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 819200 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:52.128235+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 819200 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:53.128340+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 819200 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:54.128437+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 819200 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:55.128595+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: mgrc ms_handle_reset ms_handle_reset con 0x563a3e8e7c00
Nov 26 11:59:09 compute-0 ceph-osd[88091]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/981219021
Nov 26 11:59:09 compute-0 ceph-osd[88091]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/981219021,v1:192.168.122.100:6801/981219021]
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: get_auth_request con 0x563a401dd800 auth_method 0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: mgrc handle_mgr_configure stats_period=5
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 540672 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:56.129356+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 540672 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:57.129488+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 540672 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:58.130136+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 540672 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 ms_handle_reset con 0x563a3fdb8c00 session 0x563a3e871680
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: handle_auth_request added challenge on 0x563a41b8c000
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:53:59.130244+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:00.130409+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:01.130518+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:02.130628+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:03.130768+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:04.130881+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:05.131380+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:06.131504+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:07.131683+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:08.132137+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:09.132273+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:10.132388+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:11.132516+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 524288 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:12.132667+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 524288 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:13.132791+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 524288 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:14.132908+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 524288 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:15.133074+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:16.133225+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:17.133375+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:18.133524+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:19.133674+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:20.133800+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:21.133948+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:22.134098+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:23.134222+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:24.134362+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:25.134529+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:26.134680+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:27.134829+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:28.134958+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:29.135091+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:30.135240+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:31.135394+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:32.135521+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:33.135884+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:34.136025+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:35.136165+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:36.136257+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:37.136376+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:38.136497+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:39.136622+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:40.136743+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:41.136871+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:42.136964+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:43.137081+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:44.137207+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:45.137340+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:46.137460+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:47.137580+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:48.137673+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:49.137801+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:50.137938+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:51.138077+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:52.138211+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:53.138338+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:54.138437+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 491520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:55.138551+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 475136 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:56.138712+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 475136 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:57.138810+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 475136 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:58.138909+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 475136 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:54:59.139078+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:00.139221+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:01.139339+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:02.139476+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:03.139621+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:04.139776+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:05.139918+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:06.140038+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:07.140172+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:08.140310+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:09.140435+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:10.140574+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:11.140711+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:12.140852+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:13.140988+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:14.141089+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:15.141238+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 450560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:16.141378+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 450560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:17.141513+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 450560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:18.141701+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:19.141863+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 450560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:20.141999+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 450560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:21.142153+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 450560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:22.142300+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 450560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:23.142437+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 450560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:24.142551+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 450560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:25.142700+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 450560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:26.142835+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 450560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:27.142974+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:28.143112+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:29.143233+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:30.143362+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:31.143462+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:32.143619+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:33.143771+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:34.143913+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:35.144077+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:36.144233+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:37.144395+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:38.144554+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:39.144670+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:40.144790+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:41.144907+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:42.145037+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:43.145165+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:44.145264+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:45.145391+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:46.145516+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:47.145664+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:48.145796+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:49.145924+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:50.146012+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:51.146143+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:52.146265+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:53.146392+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:54.146517+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:55.146671+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:56.146796+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:57.146957+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:58.147078+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:55:59.147203+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:00.147294+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:01.147391+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:02.147488+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:03.147582+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:04.147731+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:05.147871+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:06.147987+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:07.148131+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:08.148260+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:09.148389+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:10.148507+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 434176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:11.148656+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 425984 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:12.148784+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 417792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:13.148913+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 417792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:14.149014+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 417792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:15.150020+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 417792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:16.150190+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 417792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:17.150351+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 417792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:18.150480+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 417792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:19.150593+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 417792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:20.150731+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 417792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:21.150890+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 417792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:22.151045+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 417792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:23.151166+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 417792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:24.151283+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 417792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:25.151453+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 417792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:26.151583+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 401408 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:27.151740+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 401408 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:28.151836+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 401408 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:29.151960+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 401408 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:30.152072+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 401408 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:31.152200+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 401408 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:32.152352+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 401408 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:33.152463+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 401408 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:34.152620+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 401408 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:35.152802+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 401408 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:36.152925+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 401408 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:37.153068+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 401408 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:38.153176+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 401408 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:39.153291+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 401408 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:40.153440+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 401408 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:41.153551+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 401408 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:42.153706+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 401408 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:43.154062+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 401408 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:44.154228+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 401408 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:45.154378+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 401408 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:46.154537+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:47.154677+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:48.154841+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:49.154956+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:50.155048+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:51.155183+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:52.155336+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:53.155499+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:54.155690+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 368640 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:55.155874+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 368640 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:56.155994+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 360448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:57.156168+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 360448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:58.156299+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 360448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:56:59.156403+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 360448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:00.156551+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 360448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:01.156685+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 360448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:02.156820+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 360448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:03.156940+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 360448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:04.157067+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 360448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:05.157186+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 360448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:06.157296+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 344064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:07.157404+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 344064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:08.157498+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 344064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:09.157613+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 344064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:10.157735+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 344064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:11.157837+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 344064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:12.157963+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 335872 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:13.158089+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 335872 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:14.158210+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 335872 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:15.158349+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 335872 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:16.158469+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 335872 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:17.158618+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 335872 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:18.158721+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 335872 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:19.158919+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 327680 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:20.159070+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 327680 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:21.159203+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 327680 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:22.159331+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 327680 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:23.159425+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 327680 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:24.159516+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 327680 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:25.159652+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 327680 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:26.159747+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 311296 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:27.159860+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 311296 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:28.159971+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 311296 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:29.160079+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 311296 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:30.160676+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 311296 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:31.160801+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 311296 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:32.160929+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 311296 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:33.161017+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 311296 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:34.161106+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 311296 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:35.161214+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 311296 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:36.161317+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 393216 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:37.161411+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 393216 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:38.161518+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 393216 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:39.161643+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 393216 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:40.161692+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 393216 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:41.161822+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 393216 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:42.161934+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 393216 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:43.162051+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 393216 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:44.162164+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 393216 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:45.162312+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 393216 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:46.162478+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 393216 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:47.162676+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 393216 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:48.162799+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 393216 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:49.162915+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:50.163074+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:51.163185+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:52.163277+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:53.163397+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:54.163526+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:55.163675+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:56.163775+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:57.163880+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:58.164012+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:57:59.164154+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:00.164258+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:01.164362+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:02.164472+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:03.164575+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:04.164683+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:05.164832+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:06.164965+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:07.165086+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:08.165208+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:09.165311+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:10.165416+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 385024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:11.165512+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:12.165608+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:13.165667+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:14.165806+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:15.165942+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:16.166031+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:17.166126+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:18.166224+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:19.166339+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:20.166431+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:21.166534+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:22.166652+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:23.166762+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:24.166900+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:25.167015+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:26.167102+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:27.167198+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:28.167289+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:29.167387+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:30.167502+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:31.167606+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:32.167691+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:33.167789+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:34.167879+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:35.167989+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:36.168082+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:37.168174+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: do_command 'config diff' '{prefix=config diff}'
Nov 26 11:59:09 compute-0 ceph-osd[88091]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 180224 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: do_command 'config show' '{prefix=config show}'
Nov 26 11:59:09 compute-0 ceph-osd[88091]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 26 11:59:09 compute-0 ceph-osd[88091]: do_command 'counter dump' '{prefix=counter dump}'
Nov 26 11:59:09 compute-0 ceph-osd[88091]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 26 11:59:09 compute-0 ceph-osd[88091]: do_command 'counter schema' '{prefix=counter schema}'
Nov 26 11:59:09 compute-0 ceph-osd[88091]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:38.168266+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 11:59:09 compute-0 ceph-osd[88091]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 11:59:09 compute-0 ceph-osd[88091]: prioritycache tune_memory target: 4294967296 mapped: 71589888 unmapped: 1753088 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 11:59:09 compute-0 ceph-osd[88091]: bluestore.MempoolThread(0x563a3db75b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836494 data_alloc: 218103808 data_used: 159744
Nov 26 11:59:09 compute-0 ceph-osd[88091]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xaa9a3/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: tick
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_tickets
Nov 26 11:59:09 compute-0 ceph-osd[88091]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T11:58:39.168353+0000)
Nov 26 11:59:09 compute-0 ceph-osd[88091]: do_command 'log dump' '{prefix=log dump}'
Nov 26 11:59:09 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:59:09 compute-0 ceph-mon[74928]: from='client.14529 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 11:59:09 compute-0 ceph-mon[74928]: from='client.14533 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 11:59:09 compute-0 ceph-mon[74928]: from='client.14537 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 11:59:09 compute-0 ceph-mon[74928]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 26 11:59:09 compute-0 ceph-mon[74928]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 26 11:59:09 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2985465215' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 26 11:59:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 26 11:59:09 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2603793128' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 26 11:59:10 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 26 11:59:10 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2922619689' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 26 11:59:10 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 26 11:59:10 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2745596728' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 26 11:59:10 compute-0 ceph-mon[74928]: from='client.14547 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:10 compute-0 ceph-mon[74928]: pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:59:10 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2603793128' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 26 11:59:10 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2922619689' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 26 11:59:10 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2745596728' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 26 11:59:10 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 26 11:59:10 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3636970943' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 26 11:59:11 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14557 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:59:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:59:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:59:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:59:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 11:59:11 compute-0 ceph-mgr[75197]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 11:59:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 26 11:59:11 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1675766366' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 26 11:59:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 11:59:11 compute-0 systemd[1]: Starting Hostname Service...
Nov 26 11:59:11 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:59:11 compute-0 systemd[1]: Started Hostname Service.
Nov 26 11:59:11 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3636970943' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 26 11:59:11 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1675766366' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 26 11:59:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 26 11:59:11 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1144981901' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 26 11:59:12 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14563 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Nov 26 11:59:12 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1647321094' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 26 11:59:12 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14567 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:12 compute-0 ceph-mon[74928]: from='client.14557 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:12 compute-0 ceph-mon[74928]: pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:59:12 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1144981901' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 26 11:59:12 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1647321094' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 26 11:59:13 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14569 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Nov 26 11:59:13 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1349987735' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 26 11:59:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Nov 26 11:59:13 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1565412889' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 26 11:59:13 compute-0 ceph-mgr[75197]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:59:13 compute-0 ceph-mon[74928]: from='client.14563 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:13 compute-0 ceph-mon[74928]: from='client.14567 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:13 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1349987735' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 26 11:59:13 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1565412889' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 26 11:59:13 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14575 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:14 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14577 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:14 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:59:14 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 11:59:14 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:59:14 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:59:14 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:59:14 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:59:14 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:59:14 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:59:14 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:59:14 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:59:14 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:59:14 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 11:59:14 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:59:14 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:59:14 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:59:14 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 11:59:14 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:59:14 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 11:59:14 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:59:14 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 11:59:14 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 11:59:14 compute-0 ceph-mgr[75197]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 11:59:14 compute-0 podman[258215]: 2025-11-26 11:59:14.398204784 +0000 UTC m=+0.060893117 container health_status b46e4f533fc070f3c487ba2bec68d1eecba141ffd4a1dcc9b4c347110dd51e0e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 26 11:59:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Nov 26 11:59:14 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/7749075' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 26 11:59:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Nov 26 11:59:14 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/802235339' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 26 11:59:14 compute-0 ceph-mon[74928]: from='client.14569 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:14 compute-0 ceph-mon[74928]: pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 11:59:14 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/7749075' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 26 11:59:14 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/802235339' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 26 11:59:15 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14583 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 11:59:15 compute-0 ceph-mgr[75197]: log_channel(audit) log [DBG] : from='client.14585 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
